Book Review: Intuitive Eating by Evelyn Tribole and Elyse Resch

I’m carrying around 20–30 extra kilograms of bodyweight, and my plan for getting rid of it was a fairly standard one: starve myself for as long as I could manage. It turns out the science says this is incredibly bad for you, and that it is unlikely to succeed in the long run. The premise of this book is that my body can tell me when, what and how much to eat, if only I would learn to listen to it again. Continue reading “Book Review: Intuitive Eating by Evelyn Tribole and Elyse Resch”

Book Review: Leonardo da Vinci by Walter Isaacson

Who was Leonardo da Vinci? The guy who painted the Mona Lisa, right? Oh and he had some crazy ideas for some machines that never would have worked, right? If, like me, you never gave Leonardo da Vinci much more thought than that, Walter Isaacson’s book “Leonardo da Vinci” will be an interesting read, and will hopefully leave you with some challenging food for thought.

Continue reading “Book Review: Leonardo da Vinci by Walter Isaacson”

Book Review: Everything Happens for a Reason by Kate Bowler

This was supposed to be posted last week. I actually read the book very early in the week (it is quite short) and decided to get a head start on the next one (Leonardo da Vinci by Walter Isaacson, which is quite long, so that review will be a bit late). I heard about Everything Happens for a Reason: and other lies I’ve loved from Bill Gates’ list of 5 books worth reading this summer. The author, Kate Bowler, is a religious historian at Duke University’s School of Divinity; she did her doctoral thesis on the history of the Prosperity Gospel in America. This book is about how grappling with a terminal diagnosis changed her faith. Continue reading “Book Review: Everything Happens for a Reason by Kate Bowler”

Book Review: The Signals are Talking by Amy Webb

I’ve set myself the goal of reading a book a week, and writing a brief review of each one – to evaluate and cement my thoughts on it, not because I want to become a book critic. This week I read “The Signals are Talking”, by Amy Webb. The author is a quantitative futurist, which I guess is someone who takes a structured approach to envisioning what the future is going to look like, and this book is a walk through her process for finding and validating trends.

My main takeaway from the book is the concept of differentiating trend from trendy – separating the progress and trajectory of technology from the shiny data-point that is currently in vogue. The author uses flying cars as an example: popular culture, science fiction and friends, has had an obsession with flying cars (Chitty Chitty Bang Bang, anyone?), and seemingly every generation has had a stab at making one, from Henry Ford’s “Sky Flivver” to the Kitty Hawk Flyer, and yet we still don’t have a flying car on the mass market. Instead, what we have had are moving sidewalks, public mass transport, and soon, autonomous vehicles. The author shows us how these are all related, and argues that the real trend is autonomous transport, because, once I can get into a vehicle and keep working, it doesn’t really matter if the journey to my next meeting by road takes a little longer than by air. The point is that news, media and popular culture tend to focus on individual data-points, whereas, if we want to predict the future, we must look at the whole data-set.

The other realisation, for me, is that I need to spend more time paying attention to what the author calls the fringe – the people tinkering in their garden sheds, doing crazy things that aren’t yet considered shiny objects. Perhaps if DEC had paid a bit more attention to what people were starting to do with their computers, they would have been at the forefront of the PC explosion, not left in its dust. The same can be said of BlackBerry and the internet on phones. The point is to keep your ear to the ground, and your mind open.

I found the book a bit harder to take in than last week’s book. I suspect this is because the narrative gradually builds the analytical process that the author herself uses, rather than being a big review of facts. The book has a tonne of great examples from technology history and is a worthwhile read – the true value will be in actually putting it into practice.

Book Review: Why We Sleep by Matthew Walker

So I’m a bit of a workaholic. My wife, gently seeking to change this, sent me an episode of Joe Rogan’s podcast: an interview with Matthew Walker, a sleep scientist (pretty sure he’s Dr Matthew Walker, non-medical, but he doesn’t seem to make a big deal out of that). Let’s just say it was an eye-opener about the dangers of sleep deprivation. And he’s written a book.

Titled “Why We Sleep”, the book is definitely written for a wide audience. It doesn’t assume the reader knows anything about sleep, and it isn’t packed full of jargon. The author takes the reader on a journey through what sleep is, how it works, and the purpose it serves.

The book is a really good example of how to get science across to people who might not be accustomed to it. Each concept is introduced with anecdotes, and then backed up with experiments and empirical data. This was one of the most enjoyable aspects of the book: seeing how scientists go about expanding our knowledge of the universe and everything in it.

Walker describes at length how damaging sleep deprivation is for humans. There are associations – and in some cases demonstrable causalities – with everything from dementia to cancer, and just about every other ailment. Part way through the book I did find it a bit tedious, and it felt a bit melodramatic, but that could be my own denial about the damage I do to myself by not sleeping enough. By the end of the book I had come to appreciate that the author is trying to get an important point across: that we must take sleep seriously.

The last part of the book covers a number of ideas and suggestions for restoring sleep to its rightful place in society, and this is the part that gave me the most food for thought. As a manager of people, and maybe one day as a parent, how can I create an environment where getting enough sleep is a priority?

I’m not going to give some sort of rating or anything. I enjoyed the book, it carries an important message, and I think more people should read it.

GPU-accelerated Photo processing

Seems a bit passé, right? Well, apparently not. Using the GPU to do all the heavy lifting isn’t something that the popular photography applications really do. Lightroom and Photoshop have some GPU acceleration, but Lightroom doesn’t really benefit unless you have a big display (I guess they do all their rendering on a smaller view of the picture) [1].

On Linux the use of the GPU in the most popular photo suites is spotty at best. RawTherapee doesn’t seem to do it at all, and Darktable’s support is flaky. In fairness to Darktable, they do support OpenCL in some circumstances [2], but if it doesn’t work on the GPUs I have (2x Radeon R9 290X), which are now fairly old and have reasonably good driver support, you have to question whether it is worth it. In fact, even on my laptop, which has a Broadwell (i.e. 5th-generation Intel Core) CPU in it, Darktable didn’t want to work with OpenCL.

What I’m wondering is: why OpenCL? OpenGL has a shader pipeline that is pretty much purpose-built for processing 2D images, and for the slightly more complex tasks (like generating the histogram) it has had compute shaders since v4.3. It also has much better driver support than OpenCL, especially on Linux, thanks in part to Valve’s work on SteamOS.
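If you want to sanity-check the driver-support situation on your own machine, something like the following is roughly how I’d go about it (a rough sketch: glxinfo comes from mesa-demos/mesa-utils, clinfo is its own package, and darktable-cltest ships with Darktable):

# what OpenGL version and renderer does the driver expose?
glxinfo | grep -E "OpenGL (version|renderer) string"

# can the OpenCL loader actually see any platforms/devices?
clinfo | grep -E "Platform Name|Device Name"

# Darktable's own OpenCL self-test
darktable-cltest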

So, over the Christmas break I started working on an OpenGL-based, GPU-first photo editor (I’m calling it Monet). I didn’t get very far, and haven’t done much since, but I did recently get the Demosaic operation to work correctly (I based what I did on the article at [3]). I then did a fairly non-scientific comparison of a very simple action – resizing the window – and monitored the CPU and memory usage while I was at it.

CPU/Memory comparison – Darktable vs Monet vs Idle

My method was simple: open both programs, and let them idle long enough that CPU and memory usage were mostly stable. Then resize each program’s window up and down, by dragging the resize handle in and out so that the image goes from about 2cm wide to about 15cm wide and back, at a frequency of around 2Hz, for about 10 seconds. You can see from the image that Darktable had to work the CPU (in fact all of them) quite hard to do this. Monet is barely distinguishable from idle. There is a small increase in memory usage while resizing Darktable, and none for Monet. Note that in Darktable I turned off all the processing modules except Demosaic and White-balance, because that’s all Monet can do right now.
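If you want to reproduce this kind of measurement yourself, sampling each process with pidstat (from the sysstat package) while you wiggle the window gives you comparable numbers. A rough sketch – I’m not claiming this is exactly how the chart above was produced, and the monet process name is just whatever your build happens to be called:

# sample CPU (-u) and memory (-r) usage once per second for 10 seconds
pidstat -u -r -p "$(pgrep -o darktable)" 1 10

# same again for the other program while resizing its window
pidstat -u -r -p "$(pgrep -o monet)" 1 10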

This has, at least, motivated me to push on with this project, because there are some real gains to be had.

Monet is Open Source, and available at github: https://github.com/guysherman/monet

References:

  1. Adobe Community Forums: “What graphics card for Lightroom in July 2017”.  https://forums.adobe.com/thread/2354015
  2. Darktable: “Darktable and OpenCL (updated)”.  https://www.darktable.org/2012/03/darktable-and-opencl
  3. Max Smolens: “COMP 238: Advanced Image Generation: Assignment 4: Programmable Shading”.  http://msmolens.github.io/bayer-pattern-demosaicing/

Pro Tip: Batch-remove the audio from a set of video files

I recently wanted to remove the audio from a bunch of B-roll I was giving to someone else, because, let’s face it, I’m always talking about something ultra-nerdy in the background.

In my case they were the MOV files straight out of the camera, and I wanted them put into a directory called `no-audio`. On Linux or macOS, you can do this:

for i in *.MOV; do ffmpeg -i "$i" -vcodec copy -an "no-audio/${i%.*}.MOV"; done

We make use of the ability to write small shell scripts on one line, as well as Bash parameter expansion, which lets us form the new name out of the old one.
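If the parameter expansion is unfamiliar: `${i%.*}` strips the shortest suffix matching `.*`, i.e. everything from the last dot onwards, leaving the filename without its extension. A quick way to check what the loop will produce (and to make sure the output directory exists first) is:

# create the output directory the ffmpeg command writes into
mkdir -p no-audio

# dry run: print the output path for each input file
for i in *.MOV; do echo "$i -> no-audio/${i%.*}.MOV"; done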

Thanks to the following Stack Overflow posts:
https://stackoverflow.com/a/33766147/1255970
https://superuser.com/a/484860

Guitar exercise generator

One of my hobbies is playing guitar, but unlike when I was a child learning the cornet, I have never had lessons on guitar; I have learned what I know from books, the internet, and just fiddling around. Most recently I’ve been putting a lot of effort into scales, but I’ve learned them as patterns on the fret-board instead of being able to play them off sheet music. I think both approaches are necessary (i.e. knowing scales from memory and being able to navigate them without thinking, as well as being able to read them from sheet music). I have found that simply reading along with standard runs up and down scales that I already know doesn’t really help – I find the starting note and my muscle memory takes it from there. So, I’ve written a small program to help.

Continue reading “Guitar exercise generator”

Building Ardour on arch

UPDATE: It turns out that if you want LV2 GUIs to show up you need to install the suil package; it is an optional dependency, but well worth it!

NB: I haven’t written this post with the intent of showing every-day users how to build Ardour on Arch. If you just want to use it, go make a donation or subscribe over at Ardour.org, and support the development of a great application.

I switched to Arch Linux recently, because I was having major issues with Ubuntu and Debian. Building Ardour on Ubuntu/Debian is really easy because of the apt-get build-dep command, which (if a package publishes the data) pulls down all the build-time dependencies of a package.
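For reference, on Ubuntu/Debian that one command looks like this (it only works if you have deb-src entries enabled in your apt sources):

> sudo apt-get build-dep ardour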

Arch doesn’t have that, but, as I found out, Arch packages headers and static libs in with the normal packages (no separate -dev packages to install), so it somewhat evens out.

Step 1 – Dependencies

I used the package list from my post https://guysherman.com/2015/08/16/building-ardour-on-windows-with-msys2/ as the basis for this post, because MSYS2 uses pacman and has similar naming conventions for packages.

So, the first step (assuming you’ve got a working Arch install with JACK set up and working) is to install the dependencies:

> sudo pacman -S glib2 gobject-introspection c-ares \
libidn rtmpdump gnutls libutil-linux gtk-doc \
docbook-xsl intltool libjpeg-turbo jbigkit \
pkg-config ladspa icu boost curl fftw libusb \
libxml2 libogg flac libvorbis libsndfile \
libsamplerate soundtouch pcre cppunit taglib \
gnome-doc-utils gnome-common atk libpng harfbuzz \
cairo fontconfig freetype2 pixman pango jasper \
libtiff gdk-pixbuf2 shared-mime-info gtk2 \
libsigc++ cairomm atkmm pangomm gtkmm liblo \
serd sord sratom lilv aubio portaudio \
jack2 libltc rubberband soundtouch liblrdf cppunit suil

Step 2 – Get the code

Assuming you have already changed to the directory where you want to clone ardour:

> git clone git://git.ardour.org/ardour/ardour.git

Alternatively, you could go to their GitHub mirror, fork that, and then clone your fork to your machine. If you want to submit changes, doing them via GitHub PRs is by far the easiest way.

Step 3 – Build

Next, change into the ardour directory that was cloned:

> cd ardour

My Arch system had Python 3 set up as python, and the version of waf that Ardour uses doesn’t seem to like Python 3, so I had to run it with python2:

> python2 waf configure
> python2 waf

If you are missing any dependencies then you should find out during the waf configure step.

Step 4 – Run

To run the version you just built:

> cd gtk2_ardour
> ./ardev

Waf also lets you do install/uninstall/clean etc.
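For example (prefixed with python2 for the same reason as above):

> python2 waf install
> python2 waf uninstall
> python2 waf clean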


Update on my C++ development environment

A while back I wrote this post:

https://guysherman.com/2015/08/22/towards-a-cross-platform-cc-dev-environment/

At the time I was using GCC as my compiler, so I was using linter-gcc and autocomplete-clang. I found the GCC linter a bit flaky, and, to be honest, GCC’s error messages are just the worst. So, I decided to move to Clang as my compiler and have switched to linter-clang. One of the nice things here is that I now only need the .clang_complete file to supply options to Clang, which makes life a little simpler. The whole thing seems a great deal more reliable now. I also found out that the directives in the .clang_complete file need to be on separate lines.
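For anyone setting this up: a .clang_complete file is just a plain-text list of compiler directives, one per line. The flags and include paths below are only an illustration, not the ones from my project:

-std=c++11
-Wall
-I./include
-I./libs/some-library/include

The exact flags depend on your project; the key point, as noted above, is one directive per line.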

Also, at some point the atom-build plugin changed so that I need to put fully-qualified paths into all my build commands. There is a way to put variables in there, so here’s what I have now:

{
    "cmd": "{PROJECT_PATH}/waf",
    "name": "Build",
    "sh": "true",
    "cwd": "{PROJECT_PATH}",
    "targets": {
        "Clean" : {
            "cmd": "{PROJECT_PATH}/waf clean",
            "sh": "true"
        }
    //...
    }
}

I’ve dispensed with the atom-debugger plugin, because it wasn’t mature enough, plus I feel hardcore using a command-line debugger!

A couple of other plugins I use are:

  • atom-snippets to have a preamble template when I create new files
  • open-terminal-here to make it quick to open a terminal window to do things like interacting with git, and debugging.

Finally, I found a neat theme inspired by Google’s Material Design language; you need two packages for it (the install command is shown after the list):

  • atom-material-ui
  • atom-material-syntax
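Both can be installed from the command line with Atom’s package manager:

apm install atom-material-ui atom-material-syntax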


So, this brings the package list to:

  • atom-snippets
  • autocomplete-clang
  • autocomplete-plus
  • build
  • linter
  • linter-clang
  • open-terminal-here
  • switch-header-source
  • atom-material-ui
  • atom-material-syntax