Towards a cross-platform C/C++ dev environment

As you’ve probably read in my recent posts, I’ve been getting involved in the Linux audio community. I’ve discovered that I really like Linux, especially the concept of a system-wide package manager; I wish Windows had one that everybody used.

What I miss is Visual Studio. I really can’t overstate the hit my velocity has taken without IntelliSense. It’s not that I’m incapable of programming without it, but it takes such a long time to look through HTML SDK/API docs. So, I’ve been trying to sort out a system where I can lint my code, build my projects, get some form of auto-complete, and have a few other C/C++-specific conveniences, like switching between .h* and .c* files with the same base name. I also don’t want to be tied to a build system. And I’m kinda greedy: I want a graphical debugger if I can get one. I’ve got a solution now which shows potential.

As text editors go, I’ve grown to like Atom by GitHub over the last year or so. It is built mostly in JavaScript (they have moved a few things down to native code for performance, and it is now quite slick, a lot better than it was even 6 months ago), and it is super extensible, but in a nice way, not like Eclipse.

Triggering builds

One of the challenges with Linux is that there are quite a few build systems in common use: autotools, SCons, waf, and CMake, to name a few. I like waf, but autotools is super common. Anyway, I didn’t want to be tied to a build system, but I do want to do basic things like build or configure my project from a keyboard shortcut. There’s a nice plugin for this: build. It has a bunch of built-in build systems that are mainly web-focused, but it lets you tie basically any shell command to a keyboard shortcut, by providing a file called .atom-build.json.

Here’s what mine looks like:

{
    "cmd": "./waf",
    "name": "Build",
    "sh": "true",
    "targets": [
        {
            "cmd": "./waf configure",
            "name": "Configure",
            "sh": "true"
        }
    ]
}
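The targets array can hold more entries than that. For example, a Clean target could sit alongside Configure; this is just a sketch, assuming the stock ./waf clean command:

```json
{
    "cmd": "./waf",
    "name": "Build",
    "sh": "true",
    "targets": [
        {
            "cmd": "./waf configure",
            "name": "Configure",
            "sh": "true"
        },
        {
            "cmd": "./waf clean",
            "name": "Clean",
            "sh": "true"
        }
    ]
}
```

Each target just ties a name to a shell command, so the verbs of any build system can be mapped the same way.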

It’s not perfect: if you switch target, it remembers the selection, which is a pain for things like configure or clean that you do once in a while before going back to building. I’ll probably fork it and make a more C/C++-centric version one day, but it works well enough. Install it as follows:

> apm install build

Linting

Ok, so I can build, and each time I build I get to see what’s wrong, but deciphering a big blob of compiler output can be difficult and slow. Atom has a plugin which provides a generic linting infrastructure. I guess it was originally built for JSLint or something. Users can create providers for this linter to handle various languages, and it just so happens there is one which uses GCC and one which uses Clang, so take your pick. I’m using the GCC one.

> apm install linter linter-gcc

You can change the default compiler flags, so I added -Werror to make warnings appear as errors; I use that in my build script as well. linter-gcc supports a per-project config file, called .gcc-flags.json, which allows you to configure the settings for each project. I haven’t had to add any flags to it as yet, but at some point I’ll write a waf plugin to dump my CFLAGS and CXXFLAGS into this file whenever I configure. If you use autotools, I’m sure your configure script could do something similar. A note about this one: at the time of writing (version 0.36.0), this plugin is actually a little broken. I fixed it and the PR was merged, so you should probably get it from source instead, or install it with apm and then replace the main.js with the one from GitHub.
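For reference, here is a sketch of what a .gcc-flags.json might look like. The key names are from my memory of the linter-gcc README, so treat them as assumptions and check the README for your installed version:

```json
{
    "execPath": "/usr/bin/gcc",
    "gccDefaultCFlags": "-c -Wall -Werror",
    "gccDefaultCppFlags": "-c -Wall -Werror -std=c++11",
    "gccIncludePaths": ".,./include",
    "gccSuppressWarnings": false
}
```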

Auto-complete

Again, Atom has existing auto-complete infrastructure in the form of the autocomplete-plus plugin, which can be extended by providers – in this case autocomplete-clang. This is one of the key reasons LLVM and Clang were created: to be used in tool integrations. You’ll need to have Clang installed such that the clang executable is on your path, but that’s easy on all platforms. Install the plugins with:

> apm install autocomplete-plus autocomplete-clang

You can customize your cflags etc. per project with a .clang_complete file, which is the same file used by the clang-complete plugin for Vim, which is neat. Again, I haven’t had to modify it yet, but getting waf to auto-generate the file should be pretty easy.
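Since waf build scripts are Python, auto-generating the file at configure time should only take a few lines. The .clang_complete format is simply one compiler flag per line. The helper below is a hypothetical sketch of my own (the hook into waf’s configure step is omitted):

```python
def write_clang_complete(flags, path=".clang_complete"):
    # .clang_complete holds one compiler flag per line, exactly
    # as the Vim plugin (and autocomplete-clang) read it.
    with open(path, "w") as f:
        f.write("\n".join(flags) + "\n")

# Example: dump some typical include/define flags.
write_clang_complete(["-I./include", "-DNDEBUG", "-std=c++11"])
```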

Switching files

Ok, so we can build, lint, and auto-complete now. All that’s left from my wish-list is switching between header and source files with the same name, and debugging. Let’s deal with the former; there’s a plugin for that.

> apm install switch-header-source

Debugging

This one is probably the least satisfying. I’ve installed a plugin called atom-debugger, which basically bootstraps gdb. At the moment it is very basic, but I think it could be improved quite quickly, so I’ll probably find myself contributing to it. Install it with:

> apm install atom-debugger

Conclusion

Linux is actually a really awesome environment to write code in, mainly because of system-wide package management and a defined scheme for where libraries, headers, etc. go, which means you very seldom have to add special include or lib directory paths to your compiler flags. Visual Studio is an incredibly powerful IDE, which you miss when you are relegated to a mere text editor. Atom is a great, highly extensible text editor which can be extended into quite a functional IDE for developing in C/C++, without being tied to a build system or compiler, and it can be done on Linux, OS X and Windows. Boom.

 

Building Ardour on Windows with MSYS2

NOTE: This post was written for Ardour 4. There is at least one more dependency (liblua, possibly more). Since I wrote this article, I’ve mostly shifted my development environment to linux, hence I haven’t kept this post up to date. So, caveat emptor, YMMV, and other suitable disclaimers. If you just want to use Ardour on Windows, it is now supported on Windows, so why not donate to the Ardour developers and get an official download? I’m currently toying with building Ardour for Windows via the Windows Subsystem for Linux on Windows 10, but a post on that is a bit far away.

Building Ardour on Linux is quick and easy. Building Ardour on Windows…isn’t. That being said, I bashed my way through it, and have come up with a strategy that will get you up and running.

Who is this post for?

This post is most definitely for developers. Familiarity with MSYS2 is handy, but as I was basically learning how MSYS2 works as I went, this post won’t assume terribly much on that score. If you just want to use Ardour on Windows, I highly recommend downloading a nightly build and using that instead. Keep in mind that Ardour is unashamedly built for Linux, and it is fantastic on that platform, so if you want the most out of Ardour, use it on Linux. If you MUST use Windows, know that at present it isn’t supported, and sometimes the nightly builds might be broken. That being said, the Ardour developer community is looking into how support could become possible one day.

Other things you might want installed

I find it is handy to have JACK and ASIO4ALL installed.

Getting Started

The first thing you’ll need to do is install MSYS2, but what is MSYS2? Google says this:

MSYS2 is a successor of MSYS and MinGW-builds. 
MSYS2-shell uses “pacman” for downloading packages from
repo, and these are GPG signed & verified. Packages are
built by developers: lexx83 (Alexpux), mingwandroid, niXman.

Essentially it is a very light GNU-like environment for Windows, like a very stripped-down Cygwin. It makes it a lot easier to build Linux programs and libraries for Windows that don’t use CMake. There is a good set of instructions here on how to install it and set it up, so I won’t copy them.

Great, now that you’ve followed all the way through that, you should have an up-to-date MSYS2 system. Next there are quite a few packages that need to be installed, most of them from source, and we’ll do it in four batches: tools, pre-built dependencies, source dependencies, and then some more pre-built packages which require some of our source packages. Before that, we should discuss the three shells available to you:

  • MSYS2 shell
  • MinGW-w64 Win64 shell
  • MinGW-w64 Win32 shell

Essentially, the MSYS2 shell is a runtime environment with full POSIX support. This is most useful for the unix tools. The other two shells have less POSIX compliance, but play much nicer with the Windows APIs, so if you want to build a Windows application, you’ll use these. Of the latter two, I’m only using the MinGW-w64 Win64 shell, which uses a 64-bit toolchain. That being said, it is easiest if you do all of the package installation (both pre-built and from source) in the MSYS2 shell. There’s a bunch more detail here.

Install various tools
> pacman -S mingw-w64-x86_64-toolchain python3 \
python3-setuptools mingw-w64-x86_64-python3 \
mingw-w64-x86_64-python3-setuptools python2 \
python2-setuptools pkg-config \
mingw-w64-x86_64-pkg-config autoconf automake perl \
gtk-doc flex bison patch libtool \
mingw-w64-x86_64-libtool wget git nasm \
mingw-w64-x86_64-nasm dos2unix mingw-w64-x86_64-cmake

This will take a while based on your internet connection, etc.

Install pre-built dependencies
> pacman -S mingw-w64-x86_64-glib2 \
mingw-w64-x86_64-gobject-introspection \
mingw-w64-x86_64-pkg-config mingw-w64-x86_64-c-ares \
mingw-w64-x86_64-ca-certificates \
mingw-w64-x86_64-gnutls mingw-w64-x86_64-libidn \
mingw-w64-x86_64-libssh2 mingw-w64-x86_64-rtmpdump \
mingw-w64-x86_64-gnutls libgnutls-devel \
libutil-linux-devel gtk-doc \
mingw-w64-x86_64-docbook-xsl intltool \
mingw-w64-x86_64-libjpeg-turbo \
mingw-w64-x86_64-jbigkit \
mingw-w64-x86_64-ladspa-sdk

Build and install source dependencies

So the thing about MSYS2 is that it stays very “bleeding edge”, whereas most of the Ardour devs use Debian, which has quite a long update cycle. Both approaches have their merit, but they come into conflict when you want to build something that expects Debian-ish packages, but have only the most up-to-date packages available. Thankfully, others have blazed this trail before me and have created repositories of package build scripts for most of the packages you’ll need for Ardour. Most of these are quite straightforward to install, but there are some mighty strange quirks, which we’ll discuss as we get to them.

To get these package scripts, clone my github fork of them:

> git clone https://github.com/guysherman/MINGW-packages

NB: Make sure to switch to the guypkgs branch:

> cd MINGW-packages
> git checkout guypkgs

As the README says, to make a given package, do the following:

> cd ${package-name}
> makepkg-mingw

And to install it do:

> pacman -U ${package-name}*.pkg.tar.xz

Alternatively, you can do this to build and install at once:

> cd ${package-name}
> makepkg-mingw --install

There are also a few useful flags that you’ll need at times:

--nocheck - disables the 'check' step, which usually
            runs unit tests
--nodeps - disables dependency checking, which is
           useful because a few packages have cyclic
           runtime dependencies
--skippgpcheck - doesn't check the signature of the
                 package. I needed this on a couple of
                 packages; only use it if you get
                 errors relating to unknown public keys
                 or the PGP part of the process.

So, now that we’ve covered how to build and install the packages, build and install the following list of packages:

 mingw-w64-x86_64-icu
 mingw-w64-x86_64-boost
 mingw-w64-x86_64-curl
 mingw-w64-x86_64-fftw
 mingw-w64-x86_64-libusb
 mingw-w64-x86_64-libxml2
 mingw-w64-x86_64-libogg
 mingw-w64-x86_64-libflac
 mingw-w64-x86_64-libvorbis
 mingw-w64-x86_64-libsndfile
 mingw-w64-x86_64-libsamplerate
 mingw-w64-x86_64-soundtouch
 mingw-w64-x86_64-wineditline
 mingw-w64-x86_64-pcre with --nocheck
 mingw-w64-x86_64-cppunit
 mingw-w64-x86_64-taglib
 mingw-w64-x86_64-dlfcn
 mingw-w64-x86_64-gobject-introspection with --nodeps
 mingw-w64-x86_64-gnome-doc-utils
 mingw-w64-x86_64-gtk-doc
 mingw-w64-x86_64-gnome-common
 mingw-w64-x86_64-atk
 mingw-w64-x86_64-libpng

These should all be straightforward, but some of them will take a long time. Next we get into the weirdest quirk I’ve seen in a while. There are two libraries, harfbuzz and freetype, which it turns out have a circular dependency on each other. Here is how to deal with it.

First, build and install harfbuzz with --nodeps:

> cd mingw-w64-x86_64-harfbuzz
> makepkg-mingw --nodeps --install

This will end up installing the following packages from the pre-built repositories, but freetype will have no harfbuzz support:

mingw-w64-x86_64-cairo
mingw-w64-x86_64-fontconfig
mingw-w64-x86_64-freetype
mingw-w64-x86_64-pixman

So, the next step is to delete the harfbuzz package that you already built and rebuild it (assuming you’re still in the harfbuzz directory):

> rm -rf mingw-w64-x86_64-harfbuzz*.pkg.tar.xz
> makepkg-mingw --nodeps --install

Next build and install the following packages:

mingw-w64-x86_64-freetype
mingw-w64-x86_64-fontconfig
mingw-w64-x86_64-pixman

Ok, now that’s the harfbuzz/freetype weirdness out of the way. You should be good to go ahead and build and install the rest of the source packages:

 mingw-w64-x86_64-pango
 mingw-w64-x86_64-libjpeg-turbo
 mingw-w64-x86_64-jasper
 mingw-w64-x86_64-libtiff
 mingw-w64-x86_64-gdk-pixbuf2 
 mingw-w64-x86_64-shared-mime-info
 mingw-w64-x86_64-gtk2 
 mingw-w64-x86_64-libsigc++
 mingw-w64-x86_64-cairomm
 mingw-w64-x86_64-glibmm
 mingw-w64-x86_64-atkmm
 mingw-w64-x86_64-pangomm
 mingw-w64-x86_64-gtkmm
 mingw-w64-liblo
 mingw-w64-serd
 mingw-w64-sord
 mingw-w64-lv2
 mingw-w64-sratom
 mingw-w64-lilv
 mingw-w64-aubio
 mingw-w64-portaudio
 mingw-w64-jack
 mingw-w64-libltc

The second wave of pre-built packages

Ok, so now there are two more pre-built packages that you need to install:

> pacman -S mingw-w64-x86_64-rubberband \
mingw-w64-x86_64-vamp-plugin-sdk

Actually Building Ardour

Ok, so now you have all the dependencies you need to build and run Ardour (although we’ll discuss one other workaround a bit later, which solved a problem that I had; I’m not sure whether you will encounter it).

For now you’ll need to clone my fork of Ardour; I’ll update this post when my pull request is accepted, at which point you can clone the upstream repository (git://git.ardour.org/ardour/ardour.git) instead. Get the Ardour code (do this from somewhere you’d like the code to go):

> git clone https://github.com/guysherman/ardour.git
> cd ardour

Ardour uses the waf build system. I found there was a tiny quirk with running it under MSYS. Run it as follows to configure Ardour:

> MSYSTEM=''
> ./waf configure \
  --dist-target=mingw \
  --prefix=/mingw64 \
  --configdir=/share

Then to build it, run

> ./waf

If you want to install it you can do so with

> ./waf install

I’m more interested in running it:

> cd gtk2_ardour
> ./ardev-win

Now, you might find that you get white text on white buttons, like I did, and you’ll probably see warnings about the GTK ‘clearlooks’ engine. I worked out a weird workaround for this.

Weird clearlooks workaround

Go to a folder somewhere (e.g. ~/) and get the gtk-engines source code:

> cd ~/
> wget http://ftp.gnome.org/pub/GNOME/sources/gtk-engines/2.20/gtk-engines-2.20.2.tar.gz
> cd gtk-engines-2.20.2
> ./configure \
  --build=x86_64-w64-mingw32 \
  --host=x86_64-w64-mingw32 \
  --prefix=/mingw64
> make
> make install

You’ll get a whole lot of warnings that it can’t find various DLLs. That’s no biggie in this case.

Next you’ll want to download a nightly build of ardour for windows and install it.

Then copy C:\Program Files\Ardour4\lib\gtk-2.0\engines\libclearlooks.la to C:\msys64\mingw64\lib\gtk-2.0\2.10.0\engines. From the MSYS2 shell, that is something like:

> cp '/c/Program Files/Ardour4/lib/gtk-2.0/engines/libclearlooks.la' /mingw64/lib/gtk-2.0/2.10.0/engines/

It’ll want to overwrite a file, which is all good. Now if you go back and run Ardour, it should look fine.

UPDATE: ACKNOWLEDGEMENTS

It occurred to me after I wrote this that I should have acknowledged all the help the Ardour devs gave me while I was asking pesky questions. They are a very helpful and welcoming group.

Building Ardour on Ubuntu

[UPDATE] I’ve found that the instructions below cause jackd2 to be uninstalled, which causes some problems. Reinstalling the jackd2, libjack-jackd2-0, libjack-jackd2-dev and pulseaudio-module-jack packages should fix those issues[/UPDATE]

I’m taking a journey into audio software, and I’ve been playing around on Linux because there’s quite a good open-source audio community out there. The shining star (in my opinion) is Ardour, a really excellent, open-source DAW. In fact Harrison Mixbus is based on it.

Anyway, many of the developers who work on Ardour use Debian, but I prefer UbuntuStudio, so here are some simple steps to building on Ubuntu. I’m assuming that you already have a dev environment set up (i.e. you’ve gone ahead and installed build-essential, git, etc.).

These instructions are pulled largely from the Ardour instructions, with a few other useful tips given to me by Robin Gareus and Paul Davis in IRC.

I’m running Ubuntu Studio 15.04.

Step 1 – The dependencies

The Ardour website has a list of Ardour’s dependencies. You’ll notice that there are a few libs which they have modified versions of; you don’t need these special ones, you can get by without them. I learned yesterday that APT has a very neat mechanism for installing these dependencies: a package maintainer can specify the list of dependencies required to build a package, not just to install it. (Note that apt-get build-dep requires deb-src lines to be enabled in your /etc/apt/sources.list.) So, to install the dependencies, do the following:

> sudo apt-get build-dep ardour3

On Debian, that would actually give you all the dependencies for Ardour 4 (their Ardour3 package is actually version 4.x). On Ubuntu there are a couple more dependencies you need to install:

> sudo apt-get install vamp-plugin-sdk libtag1-dev libaubio-dev liblrdf0-dev librubberband-dev

Now you should have all the dependencies for Ardour.

Step 2 – Get the code

Assuming you have already changed to the directory where you want to clone Ardour:

> git clone git://git.ardour.org/ardour/ardour.git

Alternatively you could go to their GitHub mirror, fork that, and then clone your fork to your machine. If you want to submit changes, doing so via GitHub PRs is by far the easiest way.

Step 3 – Build

Next change into the ardour directory that was cloned

> cd ardour

Then we build

> ./waf configure
> ./waf

If you are missing any dependencies, you should find out during the waf configure step.

Step 4 – Run

To run the version you just built

> cd gtk2_ardour
> ./ardev

Waf also lets you do install/uninstall/clean etc.

Linux Audio

I’ve recently started learning about signal processing, and a programming language called Faust. These tools are very Linux-focused, so I figured the best way to get up and running would be to dive into Linux. It’s been a while since I really gave Linux a fair look, so I was due for it.

Without digressing too far, I would like to announce that I now quite like Linux, and I could pretty much use it for all of my home computing apart from gaming.

Anyhow, I’m running Ubuntu Studio, which is an audio-focused Ubuntu derivative, and there was one little snag.

The Importance of Time

TL;DR

If you are getting errors about certificate revocation, SmartScreen, Windows Defender definitions, or logging into your Live account, there’s a good chance either your BIOS clock, your Windows system time, or both are incorrect. Changing your BIOS clock will probably require you to also change your Windows system time. I found that getting both of these in sync, and correct, solved my problems. Read on for a bit more detail.

/TL;DR.

I recently replaced the motherboard in my PC, because the previous one had died. It was a warranty replacement so I didn’t have to worry about drivers etc, but I did have to do a Windows Activation over the phone because the MAC address and a few other details had changed.  What I didn’t pay close attention to was the time in the BIOS (well, UEFI is more accurate, but whatever), and later the time in Windows itself.

I noticed a few weird issues with my PC:

  • Every time I ran a downloaded exe, I got an error saying “Windows SmartScreen can’t be reached right now”
  • Windows defender was refusing to download updates. In fact, at any given time it would say that I had installed tomorrow’s update, today.
  • Visual Studio was refusing to renew my Windows 8 App Developer License
  • I couldn’t log into Skype with my Windows Live account
  • I was getting an SSL warning in Chrome saying “unable to check whether the certificate has been revoked”
  • I was getting a similar error through IE (or applications that use it under the hood for talking to the internet) saying “Revocation information for the security certificate for this site is unavailable”

A bunch of googling led me to look in my BIOS and see what time the hardware clock was reporting. It was wrong; I forget exactly how wrong, but it was at least 12 hours out if my memory serves correctly. After booting back up, I found that Windows Defender was now happily installing updates, and I *think* SmartScreen may have started to behave, but the other issues still presented themselves.

I left it there because I had work I needed to do, so I went to the office and used my Windows machine there. Last night I did a bit more googling, which led me to double-check my system time in Windows (which had been correct initially). It turns out that after changing the clock in the BIOS, the system time had gone all whack. The time was OK, but the date had taken me all the way out to the 26th of December. “How rude!” I thought. “Trying to cheat me out of Christmas!” Anyhow, I set the clock to the correct date and time, and now it is all good.

Why does this happen? My supposition is that a bunch of the cryptography involved depends on the current time (I’m pretty sure that this is the case for Windows Live accounts, as they are no doubt based on the same Kerberos implementation used in Active Directory, and Kerberos is very time-sensitive).

So, hopefully you’ve come to this page via google because you’re struggling to solve one or more of the issues above. If that is the case, I hope this post helps solve your problems.

Guy’s GDC Roundup

Today I’m sitting in Santa Cruz, having spent an action-packed week in downtown San Francisco at Game Developers’ Conference 2014. GDC is an event I’ve wanted to get along to for many years, and I have followed it closely (albeit vicariously) through twitter and the blogosphere for the past few years. I really wasn’t prepared for what a massive event it was. In every time-slot there were at least 3 sessions that I really wanted to see, and usually a couple more that I would be interested in seeing. Luckily, my excellent friend and boss Danu got me an “All Access” pass, which includes 12 months access to the “GDC Vault”, which has recordings of [pretty much] all the sessions. I’ll summarise a few of the main things I took away from the conference.

High-Quality Mobile Graphics

I went to several talks about high-quality graphics, from Imagination Technologies (ImgTec), ARM, Intel, Unity and Epic. They all had different names for what I’m simply calling high-quality graphics; some talked about “Console Quality”, some about “AAA Quality”, some about “PC Quality”, but the long and the short of it is that mobile SoCs now have enough compute, and enough memory bandwidth, to do graphics at least as good as the last generation of consoles (Xbox 360 and PS3), if not better. I learned that you have to be a lot more cunning than on PC, though, because mobile GPU implementations differ quite wildly, so you have to be prepared to have multiple render paths in your engine, and you have to think carefully about what your lowest common denominator is.

Things to watch: ARM announced some GLES 3 extensions which both they and ImgTec support, which should make deferred shading a great deal easier on mobile: one of them is the ability to sample the backbuffer in a shader, and the other is called “Pixel Local Storage” – which allows you to store extra data along with the colour/depth/stencil data in the backbuffer. Essentially it is global data, in the same block of memory as the backbuffer, that persists from draw call to draw call. You could thus use the backbuffer as your G-Buffer for deferred shading. In addition to this, ImgTec announced that they are releasing a new GPU based on their 6 series design, which has ray-tracing hardware in it. The idea would be to use rays for things like shadows and environment maps. It was quite convincing, but if only they have it, it goes against the whole “lowest common denominator” idea. I really hope other vendors pick this up.

Remember that the GPU is Asynchronous

I attended a talk on how to find, fix and avoid “GPU Sync points”, which are places where you make your application code wait for the GPU to do something, causing an enormous pipeline bubble. I attended a talk on how the Nitrous engine from Oxide Games works with AMD’s Mantle, as well as two talks on DirectX 12. Both Mantle, and DirectX 12 bring the application a giant step closer to the GPU, throwing away things like state-tracking in the driver, and the multi-layer App/Client/Server/Driver architectures that we have today. The biggest key from all of these talks was that you have to remember that the GPU is quite far away from the CPU (not quite so far for intel, but miles away for a discrete GPU), and as such you should treat it as an asynchronous resource. Queue work up, and let it go.

Things to watch: Both Mantle and DirectX 12 allow the application to create resources and queue work on the GPU from whatever thread it wants. I had some conversations (with some people who will remain nameless) that led me to believe OpenGL (at least OpenGL ES) will head in this direction in the near future. If you are building engines, you had better build them to be task-based. If you’re using other people’s engines, you had better demand that they are built task-based and scale well with multiple cores. Dan Baker from Oxide Games mentioned in two separate talks that their task-based architecture, when unleashed with Mantle or DX12, took them from being CPU-bound in DX11 to [sometimes] being GPU-bound. The powers-that-be are about to give us the ability to actually use our GPUs; make sure you’re ready to do so.

We’ve got to fix this monetization thing

For all this tech talk, I felt like there was a bit of a looming shadow at the conference: despite the ease with which you can distribute a good game, it is really hard to get your good game noticed, because the market is awash with games that have been ruined because the designers built the thing around fleecing the customers. Something like 60% of the show floor was advertising and payment processing providers. There was a talk on “how to monetize teens”.

I attended the rant session, and there was a slide with a count-down, and an “IAP” that could get us to the next slide; eventually two guys got up and paid some money to the speaker, and he asked the crowd to “give the Whales a round of applause” – all of this, obviously, was to make a point. Later in that session, we were reminded that the terms “Minnow” and “Whale” come from the slot-machine industry, and we were challenged about the morality of viewing our audience in the same way.

Then I found out that Firaxis/2K/Take-Two brought X-Com Enemy Unknown to mobile for $25 (NZD) and that it is selling like wild-fire and driving console and PC sales! I reminisced with others about the good old days of Doom, and Wolfenstein, and Commander Keen, and other games which we all got the first few levels for as shareware, and then bought the full version later.

By the end of the week, I came to this conclusion: we (the industry) need to stop making slot machines for children, and start making fun games, that people want to play, and want to play again, and then work out how to avoid giving it all away for free. Concentrate on making awesome games, stop working out how to fleece your players.

Things to watch: There was a massive room full of people at the rant session; I can only hope a bunch of them took on board the idea of making good games, rather than designing mechanics to fleece their customers.

Conclusion

GDC was awesome. Mobile tech has got awesome. We need to make awesome games.

My Ultimate Cygwin Setup

2017 UPDATE:

This post is quite out of date now, but has been updated thanks to excellent commenters. If you want a unix console on Windows 10, you should go ahead and install Ubuntu from the Windows Store. I think there are other flavours coming soon as well. Read more about it here: https://blogs.msdn.microsoft.com/commandline/2017/05/11/new-distros-coming-to-bashwsl-via-windows-store/

If you don’t have Windows 10, Jon L provided the updated details on how to get apt-cyg in the comments, and I’ve amended this post to include that detail.

 

I sat down today to do some programming, and I got a little bit distracted improving my environment, but I think that where I’ve got to is quite good, so I’ll share it with you. So, what are my requirements?

  • a unix-like console with
  • an easy mechanism for installing packages
  • git, python, and other useful things
  • summary information about my git repos, at the command prompt
  • auto-completion for git
  • shortcuts to frequently used folders, and some other conveniences

A unix-like console on Windows

Have you heard of Cygwin? Cygwin is great. Cygwin is unix on Windows. Sometimes I think Microsoft should just buy Cygwin, and make it a first-class citizen of Windows. Installing Cygwin is easy, but the base install doesn’t come with many packages; you can re-run the installer to add more, but if you’re like me and you don’t save your downloads to disk, it can be a bit of a pain. Get the installer here. Don’t finish the installation just yet because…

An easy mechanism for installing packages

You’ll need a couple of extra packages in order to install apt-cyg. I learned how to install it here. So, install Cygwin with the following extras selected, to make sure you can run apt-cyg:

  • wget
  • tar
  • bzip2
  • subversion
  • vim

Once Cygwin is up and running do the following:

$ wget rawgit.com/transcode-open/apt-cyg/master/apt-cyg -P /bin/
$ chmod +x /bin/apt-cyg

If you’re running the x86_64 version of Cygwin (which I recommend if you’re on 64-bit Windows), then you’ll also want to open up /bin/apt-cyg in a text editor:

$ vim /bin/apt-cyg

and change the following two lines

98:  wget -N $mirror/setup.bz2 to wget -N $mirror/x86_64/setup.bz2
105: wget -N $mirror/setup.ini to wget -N $mirror/x86_64/setup.ini

It looks like apt-cyg has since been updated to deal with multiple architectures, so this edit may no longer be necessary.
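If you’d rather script that edit than do it by hand, sed can make both changes at once. This is a sketch demonstrated on a stand-in file (point it at /bin/apt-cyg for the real thing); note that $ is not special mid-pattern in a basic regex, so it matches the literal dollar sign:

```shell
# Create a stand-in file with the two wget lines from apt-cyg.
printf 'wget -N $mirror/setup.bz2\nwget -N $mirror/setup.ini\n' > apt-cyg-demo

# Rewrite both lines to point at the x86_64 repository.
sed -i 's|$mirror/setup|$mirror/x86_64/setup|g' apt-cyg-demo

cat apt-cyg-demo
```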

Now you’re ready to install some more useful tools!

Git, Python and other useful things

  • bash-completion – auto-complete for bash
  • ca-certificates – allows your Cygwin environment to validate SSL certificates
  • curl – a useful command-line tool for accessing URLs, similar to wget, but more powerful
  • git – a distributed version control tool
  • git-svn – lets git play nice with SVN
  • python – an interpreted programming language. I like to write shell scripts with it
  • python-setuptools – people use this when distributing their python stuff
  • rsync – handy tool for synchronizing stuff from one place to another (especially over the internet)

You should be able to install all of these with apt-cyg, it’ll handle all the dependencies:

$ apt-cyg install bash-completion
$ apt-cyg install ca-certificates
$ apt-cyg install curl
$ apt-cyg install git
$ apt-cyg install git-svn
$ apt-cyg install python
$ apt-cyg install python-setuptools
$ apt-cyg install rsync

[Update]
I forgot to mention ncurses, a library for writing text-based interfaces. I use it to create cls (see my update lower down).

$ apt-cyg install ncurses

Summary information about my git repos, at the command prompt

I have been using posh-git inside Windows PowerShell. The thing I like about posh-git is that it gives you an overview of your repository status at your command prompt, like so:

D:\Users\Guy\Programming\git\bitlyWin8 [master +5 ~9 -0 | +0 ~0 -6]>

It tells you which branch you are on, and gives you a summary of both tracked and untracked changes. However, it can make my prompt quite slow in large repositories, because the script that generates it takes a while to run. I still wanted something like it in Cygwin. What I have found is less detailed than posh-git, but it is nice and quick, and gives me a simple indicator of what I need to do with my repository. There is a script called git-prompt.sh which can do some nice things with your prompt, so let’s go ahead and get that. We’ll be adding a few shell scripts that get run when you open a Cygwin terminal window, and I like to keep these in a bin folder inside my home folder. We’ll download the shell scripts there:

$ cd ~
$ mkdir bin
$ cd bin
$ wget https://raw.github.com/git/git/master/contrib/completion/git-prompt.sh

Now edit your .bash_profile to add git command prompting to your bash sessions. The top of the git-prompt.sh file has a good explanation of the options, which are controlled by environment variables.

source ~/bin/git-prompt.sh

# Show if there are unstaged (*) and/or staged (+) changes
export GIT_PS1_SHOWDIRTYSTATE=1

# Show if there is anything stashed ($)
export GIT_PS1_SHOWSTASHSTATE=1

# Show if there are untracked files (%)
export GIT_PS1_SHOWUNTRACKEDFILES=1

# Show how we're tracking relative to upstream
export GIT_PS1_SHOWUPSTREAM="verbose"

# Save the old command prompt, and set the new one to: <number of commands issued> user:directory (branch <status symbols>)$ e.g. 167 Guy:Arduino (master u=)
PS1_OLD=${PS1}
export PS1='\[\033[1;34m\]\!\[\033[0m\] \[\033[1;35m\]\u\[\033[0m\]:\[\033[1;35m\]\W\[\033[0m\] \[\033[1;92m\]$(__git_ps1 "(%s)")\[\033[0m\]\$ '

Auto completion for git

We already installed bash-completion, and now all we need to do is add a script that supplies completion functions for git. To download the script:

$ cd ~/bin
$ curl https://github.com/git/git/raw/master/contrib/completion/git-completion.bash -OL

And then add it to your .bash_profile:

source ~/bin/git-completion.bash

Shortcuts to frequently used folders, and some other conveniences

Cygwin maps your Windows filesystem to /cygdrive/<drive letter>/ but this can be a bit tedious to get to, so I’ve created some shortcuts. You might like to break these out into a separate file if you end up with heaps of them, but I’ve only got a few for now. Open up your .bash_profile and add the following things:

alias cdp='cd /cygdrive/d/Users/Guy/Programming'
alias cda='cd /cygdrive/d/Users/Guy/Documents/Arduino'
alias cdc='cd /cygdrive/c'
alias cdd='cd /cygdrive/d'

[Update]
Cygwin doesn’t have cls out of the box, but with ncurses installed you can use the command tput clear to do the same thing. I aliased it to cls (thanks mikyra):

alias cls='tput clear'

[Update 2]
There’s a terminal command open in OS X that is quite nice; it essentially does a shell execute on whatever you pass it. Cygwin has something similar called cygstart, but that’s not as nice a name (thanks erichui).

alias open='cygstart'

I’ve also added an alias to reload my bash profile (so that later on I can edit it, and then see my changes easily).

alias reload='source ~/.bash_profile'

Finally, I like to add that bin folder I created to the path:

export PATH="${HOME}/bin:${PATH}"

Acknowledgements

  1. Taylor McGann’s blog was useful in showing me how to do a bunch of the git prompt, git completion, and fancy bash profile stuff.
  2. This StackOverflow post on apt-cyg.
  3. This StackOverflow post on clearing the screen.
  4. This StackOverflow post on cygstart.
  5. Jon L in the comments.

ASP.NET MVC4 Binding to a List of Complex Types

UPDATE: I have an idea how they might be doing it: Expression Trees!

One of my colleagues and I ran into a little trouble with Model Binding in ASP.NET MVC 4 yesterday. We wanted a form to post back an IEnumerable of a complex type. Doing this naively didn’t work. After a bit of googling and experimentation we found out how to get what we wanted, but the answer was a bit quirky: you must use an array/collection index operator inside your Html.Whatever() lambda, otherwise the Html helper doesn’t know that it should put an array index at the front of the field names in your form. If it doesn’t put this index in (in the form of “[0].”, “[1].”), then the MVC model binder can’t work out how to group your fields into instances of a complex type.

Continue reading “ASP.NET MVC4 Binding to a List of Complex Types”

Porting Open Asset Import Library (Assimp) to WinRT (4)

So, I changed my mind about the whole zlib vs System.IO.Compression question, partly because the latter is not already a Windows Runtime accessible type (afaik), and partly because the code would not be at all portable, and I’d like to be able to contribute my work back to the community if I can. I’ve run into a couple of other problems though: one to do with Microsoft’s war on Buffer Overrun exploits (fair enough, Windows XP was a real problem that way), and the other to do with the security constraints that come with Windows Store apps.

Buffer Overrun / Overflow

First, the whole buffer overrun thing: it seems that Microsoft have deprecated large portions of the C Runtime on Windows (fopen, strcpy, pretty much anything that takes a potentially unsanitary char* as an argument, and writes the data contained therein to another unsanitary char* argument), in favour of the ‘safe’ variants of these functions (which only exist on Windows). The gist here is that you need to tell it how long the destination buffer is, so that the function will not overrun the buffer. I can understand why they did this, after the whole security shake-up they had during the Vista development cycle (rumoured to be the reason it took so long). Incidentally, we might see Apple making similar moves after this little hiring.
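The shape of these ‘safe’ calls is easy to sketch. Here’s a minimal illustration (bounded_copy is a made-up name; strcpy_s is the MSVC variant, and the fallback uses standard snprintf to get similar bounded behaviour elsewhere):

```c
#include <stdio.h>
#include <string.h>

/* The destination size travels with the call, so the copy cannot overrun
   the buffer. bounded_copy is an illustrative wrapper, not a real API.
   Note the variants differ on overflow: strcpy_s invokes the invalid
   parameter handler, while snprintf silently truncates. */
static void bounded_copy(char *dst, size_t dst_size, const char *src)
{
#ifdef _MSC_VER
    strcpy_s(dst, dst_size, src);       /* MSVC 'safe' variant */
#else
    snprintf(dst, dst_size, "%s", src); /* portable bounded copy */
#endif
}
```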

Apparently these warnings have existed since Visual Studio 2005, but with Windows Store apps, the warnings about these functions have become errors. You can turn them off by defining _CRT_SECURE_NO_WARNINGS, but I’m not sure if I want to. On the one hand, it does concretely close a potential attack vector in any code I write. On the other hand, Microsoft’s series of _s functions (fopen_s, strcpy_s, wcstombs_s, etc.) are not part of Standard C, and are thus not portable either. Some people have suggested that if one is re-writing C code to use these functions, they should just re-write it in C++ and use the iostream classes instead, which is great, except most OSS newbies like me don’t have the clout to get a C++ re-write of zlib accepted. So, I’m left with two options: write code that Microsoft have deemed they don’t want running on their OS, or write ugly, macro-heavy wrappers so that the code uses fopen_s on Windows, and fopen elsewhere. I’m going to try the second option for now, and see how far that gets me. Don’t be surprised if I blog in the future about how the Assimp core team won’t let me commit my code.
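A rough sketch of what one of those wrappers might look like (ai_fopen is a hypothetical name for illustration, not Assimp’s actual API):

```c
#include <stdio.h>
#include <string.h>

/* Call fopen_s on MSVC, plain fopen everywhere else, so calling code
   stays identical on both platforms. fopen_s reports failure via its
   errno_t return value, so we normalise both paths to "NULL on error". */
#ifdef _MSC_VER
static FILE *ai_fopen(const char *path, const char *mode)
{
    FILE *fp = NULL;
    return (fopen_s(&fp, path, mode) == 0) ? fp : NULL;
}
#else
static FILE *ai_fopen(const char *path, const char *mode)
{
    return fopen(path, mode);
}
#endif
```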

Windows Store App Security Model

The next issue that I’ve encountered is more interesting, and it has arisen out of the unique space that Windows RT fills in the market, as more than a tablet, but a little bit less than a full-blown Windows PC. If we look at the iPad/iPhone, and Windows Phones as well (I’m not even going to discuss Android, it’s just wrong to put Linux on a phone), each app is a completely walled garden, except for a few APIs which give access to things like Pictures, Emails, Contacts: your personal data that apps can add value to. The thing about accessing this data is, as I mentioned before, that it is done through an API; you don’t just go poking around the file system opening files as you please. Furthermore, when these apps do try to access the file system using direct, fairly low-level calls, like fopen, they can only see within their own little sandbox. This is fine on a device whose paradigm is that apps only work on their own data, but Windows is a bit different. Windows is traditionally a desktop OS: I have a Documents Library, and a Pictures Library, and a Downloads Library, and my files are a thing in themselves which transcend my current set of apps. So, you can access the file system, but it has to be through an API (StorageFile) if it isn’t within your sandbox, and you can’t even ask for files outside your sandbox unless the paths have come from a FilePicker. I think this model is sound: an app can’t touch anything it didn’t create without your very explicit permission. My goal, however, is to write a Windows Store app version of Assimp Viewer, and so I have a need to use fopen on files outside my sandbox, which was a bit of a conundrum. Luckily though, I’m not blazing a trail here, and I found this post which details a neat strategy that lets you use the StorageFile API to bring the file into (and out of, I presume) your sandbox, so that you can use your low-level C code on it.
This shouldn’t be too hard for me to get into the app, because AssimpView is already a Windows-only application.

So, my current status is that I’m testing out my build of zlib for a Windows Store App, and it seems to work thus far. I think building the assimp library itself might not be so complicated, because, today’s points aside, any ANSI C code should “just work” when compiled as a Windows Store App static (or dynamic) library.

Porting Open Asset Import Library (Assimp) to WinRT (3)

So I’ve been doing a bit of digging about in the Assimp source, and it seems that, for the core library at least, the only dependency is on zlib – which it builds in your solution on Windows. My next possible step, then, could be to move the code from this zlib project over to a Windows Runtime Component project. However, I’m a little hesitant to do this, since the System.IO.Compression namespace can do all the compression stuff, and to an extent it depends on whether it is easier to port zlib, or modify Assimp so that it doesn’t use zlib. My hope is that I can keep my changes to Assimp isolated to a fairly low level in the codebase, so that it is reasonably easy to keep in harmony with their continuing efforts, and porting zlib seems more likely to achieve that goal.

Digging further, I have found that at present, the zlib functionality is only called by the ‘unzip’ library that also comes with Assimp, which itself is currently only used by the Q3BSPZipArchive class. So, I think I might put in an alternate code-path in either the unzip code, or perhaps even the Q3BSPZipArchive to use the Windows Runtime code. I suspect that modifying unzip.c would be the best approach, because then anyone who writes a custom importer using the Q3BSP one as a template will be able to use my functionality.

It’s settled then: I’ll make an alternate unzip.c (or a preprocessor path therein) which uses the native Windows Runtime API to handle the unzipping. Not tonight though.
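The shape of that compile-time switch might look something like this (ASSIMP_USE_WINRT_UNZIP and the function are hypothetical names, purely to illustrate the alternate code path idea):

```c
#include <string.h>

/* Select the decompression backend at compile time. On a WinRT build the
   calls would route to a Windows Runtime implementation; everywhere else
   the stock zlib-based unzip path is kept, so other platforms are
   untouched and the change stays easy to merge upstream. */
#ifdef ASSIMP_USE_WINRT_UNZIP
static const char *unzip_backend(void) { return "winrt"; } /* Windows Runtime path */
#else
static const char *unzip_backend(void) { return "zlib"; }  /* stock zlib path */
#endif
```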