A nice discussion of Plantinga’s Modal Ontological Argument

•April 27, 2020 • Leave a Comment

2020-04-27 – After quite a long absence from youtube, Theoretical Bullshit — aka Scott Clifton, Daytime Emmy Award-winning actor on The Bold and the Beautiful — no, I am not making that up — has returned with a nice, thorough discussion of Plantinga’s Modal Ontological Argument. The man has far more patience than I do. And I’m pretty sure he’s smarter than I am too. And better looking. And younger.

Well worth watching if you’re into this sort of thing, but it is a bit esoteric.

SDL and sane fullscreen alt-tab behavior on linux

•April 22, 2020 • Leave a Comment

2020-04-22 – With SDL2, the default behavior of full screen windows under alt-tab (and ctrl-alt-left and ctrl-alt-right) is quite insane. Whenever the window loses focus, it minimizes, which puts it at the bottom of the window stack. This means that when you alt-tab again to get back to your window, you get a different window instead. And so you have to keep alt-tabbing through all your windows until you finally reach the window you just alt-tabbed away from, which is now sitting idiotically at the bottom of the stack of windows.

There’s a way around this. Set an environment variable before launching:

      export SDL_VIDEO_MINIMIZE_ON_FOCUS=0

Then run your program. Now you have sane window behavior. You can also put this inside your program to make the default be sane:

      setenv("SDL_VIDEO_MINIMIZE_ON_FOCUS", "0", 0);

The zero at the very end means that if the user has already set this variable to something else (e.g. 1, in case your user happens to be insane and likes insanity), you won’t override it. But nobody will get insanity by default.
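To see that last argument of setenv() in action, here’s a tiny standalone sketch (nothing SDL-specific about it):

```c
#include <stdlib.h>

/* Demonstrates the third argument of setenv(): with overwrite == 0,
 * an existing value is left alone. */
const char *minimize_on_focus_value(void)
{
	/* Pretend the user already chose insanity in their environment: */
	setenv("SDL_VIDEO_MINIMIZE_ON_FOCUS", "1", 1);

	/* overwrite == 0: this does NOT clobber the user's existing setting */
	setenv("SDL_VIDEO_MINIMIZE_ON_FOCUS", "0", 0);

	return getenv("SDL_VIDEO_MINIMIZE_ON_FOCUS"); /* still "1" */
}
```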

SDL2 fixed aspect ratio window on Linux

•April 21, 2020 • Leave a Comment

Here’s how to get a fixed aspect ratio window using SDL2 on linux. When it comes to getting a fixed aspect ratio window, SDL2 just leaves you to die. So you have to go behind its back and talk to the window manager. This will be different on Windows, Mac, and Wayland. Here I only consider the X11 situation. On linux, we can give the X11 window manager a hint.

#ifdef __linux
/* This is for constraining the aspect ratio of the window since SDL2 left us to die. */
#include <X11/Xlib.h>
#include <X11/Xutil.h>
#include <X11/Xos.h>
#include <X11/Xatom.h> /* for XA_WM_SIZE_HINTS */
#include <SDL2/SDL_syswm.h> /* for SDL_GetWindowWMInfo() */
#endif

static void constrain_aspect_ratio_via_xlib(SDL_Window *window, int w, int h)
{
#ifdef __linux
        SDL_SysWMinfo info;
        Display *display;
        Window xwindow;
        long supplied_return;
        XSizeHints *hints;
        Status s;

        SDL_VERSION(&info.version); /* SDL_GetWindowWMInfo() requires this to be set */
        if (!SDL_GetWindowWMInfo(window, &info)) {
                fprintf(stderr, "SDL_GetWindowWMInfo failed.\n");
                return;
        }
        if (info.subsystem != SDL_SYSWM_X11) {
                fprintf(stderr, "Apparently not X11, no aspect ratio constraining for you!\n");
                return;
        }
        display = info.info.x11.display;
        xwindow = info.info.x11.window;
        hints = XAllocSizeHints();
        if (!hints) {
                fprintf(stderr, "Failed to allocate size hints\n");
                return;
        }
        s = XGetWMSizeHints(display, xwindow, hints, &supplied_return, XA_WM_SIZE_HINTS);
        if (!s) {
                /* No pre-existing hints. That's fine, we'll just set ours. */
        }
        hints->min_aspect.x = w;
        hints->min_aspect.y = h;
        hints->max_aspect.x = w;
        hints->max_aspect.y = h;
        hints->flags |= PAspect;
        XSetWMNormalHints(display, xwindow, hints);
        XFree(hints);
#endif
}

Then just call constrain_aspect_ratio_via_xlib() and pass it a pointer to your SDL_Window and the width and height of the aspect ratio you want to enforce.

When you compile and link your program you’ll need some extra arguments:

X11LIBS=$(shell $(PKG_CONFIG) --libs x11)
X11CFLAGS=$(shell $(PKG_CONFIG) --cflags x11)

Typically X11CFLAGS comes out empty, and X11LIBS comes out to “-lX11”, so most likely you can just add “-lX11” to the end of your linker flags.

There’s a stackoverflow question here that has some hints but doesn’t go into a lot of detail. I would have updated that question, but stackoverflow has been inhabited by assholes since about 2009, so, nope.

Adding Voice Chat to Space Nerds in Space

•April 2, 2020 • Leave a Comment

2020-04-02 — Here’s a little description of what it took to add voice chat to Space Nerds in Space.

For audio in general, I am using portaudio, which is a somewhat low level sound library. It is low level in that it requires you to write your own mixer code, and does not provide primitives for playing WAV files or Ogg files or anything like that. You write a callback function which portaudio calls at a specified frequency and it’s the job of this function to provide a buffer of audio data for portaudio to play during the next little bit of time. If you want to play multiple sounds concurrently, this function must keep track of where in each sound we currently are, and mix them together and provide portaudio the mixed fragment of audio data on each callback.
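As a sketch of what that callback has to do (hypothetical types and names here, not the game’s actual code), mixing several active sounds into one output buffer looks something like this:

```c
#include <stdint.h>
#include <stddef.h>

#define MAX_SOUNDS 10

/* One entry per preloaded sound that may be playing */
struct sound {
	int16_t *samples;	/* PCM data of the preloaded sound */
	size_t nsamples;	/* total length in samples */
	size_t pos;		/* how far playback has gotten */
	int active;		/* currently playing? */
};

static struct sound sounds[MAX_SOUNDS];

/* Called periodically: must fill 'out' with the next nframes of mixed audio */
void mixer_fill(int16_t *out, size_t nframes)
{
	for (size_t i = 0; i < nframes; i++) {
		int32_t acc = 0; /* wider accumulator to avoid overflow while summing */
		for (int s = 0; s < MAX_SOUNDS; s++) {
			if (!sounds[s].active)
				continue;
			acc += sounds[s].samples[sounds[s].pos++];
			if (sounds[s].pos >= sounds[s].nsamples)
				sounds[s].active = 0; /* this sound is finished */
		}
		/* clamp the sum back into 16-bit range */
		if (acc > 32767)
			acc = 32767;
		if (acc < -32768)
			acc = -32768;
		out[i] = (int16_t) acc;
	}
}
```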

I had long used portaudio for this, and had built up a small audio library around it that does provide very basic features like allowing simple triggering of playback of particular preloaded sounds, mixing however many such sounds as required, etc.

But for voice chat, I needed more than just simple playback of preloaded sounds.

  1. We need to make sure the number of concurrent streams sent to a client from the server does not exceed the number of streams the client can handle.
  2. We need the ability to record sound, and receive a stream of audio data from the microphone as a series of callbacks.
  3. We need the ability to compress this audio data into packets and send them to the server process. For this I used libopus.
  4. We need the server to forward these packets to destination clients, keeping in mind that there might be multiple clients streaming audio to the server, and these would need to be then fanned back out to the destination clients. The streams would need to be kept separate though, because the decompressor is stateful, and you can’t combine multiple streams of packets and send them through a single decompressor instance. They each need their own decompressor.
  5. At the clients, we need to receive the audio data streams from the server and decompress them, and queue them up for the mixer to chew on.

To ensure that clients never receive more streams of audio than they can handle, a token system is used.

There is one token for each audio channel the clients are able to handle (nominally, there are 4 of them.) Just before a client begins recording and transmitting audio data, it requests a token from the server. It then transmits to the server without waiting (whether or not it eventually gets the token), knowing that if it doesn’t get a token, the server will just drop its packets. The server has a fixed number of tokens which it assigns to clients as they ask for them until they are all in use. If the server receives any audio packets from clients which it knows do not have a token, it just drops those packets. In this way, the server never transmits more streams of audio than the clients can handle.
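A sketch of the server side of such a token system might look like this (hypothetical names; the real code calls these tokens “talking sticks”):

```c
#define NTOKENS 4 /* one token per audio channel the clients can handle */

static int token_owner[NTOKENS] = { -1, -1, -1, -1 }; /* -1 means free */

/* Returns the token number (which doubles as the audio channel number),
 * or -1 if all tokens are in use. */
int request_token(int client_id)
{
	for (int i = 0; i < NTOKENS; i++) {
		if (token_owner[i] < 0) {
			token_owner[i] = client_id;
			return i;
		}
	}
	return -1; /* no token: this client's audio packets will be dropped */
}

/* Hand the token back so another client may ask for it. */
void release_token(int client_id)
{
	for (int i = 0; i < NTOKENS; i++)
		if (token_owner[i] == client_id)
			token_owner[i] = -1;
}
```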

Recording audio

Recording is triggered by a keypress event, and terminated by a key release event, as it is a “push to talk” system. Pressing the key requests a token from the server (but doesn’t wait for it to be given) and starts the recording process, which sets up a portaudio thread reading from the microphone and periodically calling back a function passing along the PCM audio data that was recorded 1920 samples at a time (at a sampling rate of 48000 Hz). (I chose 1920 and 48kHz because these are reasonable values supported by libopus. This did mean I had to resample my existing audio files from 44.1kHz to 48kHz.)

We transmit without waiting for the token from the server so that the instant the token is given (before the client even receives it) the server may begin accepting audio packets from the client. Also it’s easier to code as we do not need to write any code to wait for the token.

The code for that looks like this:

        if (event->keyval == GDK_F12) { /* F12 key pressed? */
                if (!have_talking_stick)
                        request_talking_stick(); /* ask snis_server for a token */
                /* We transmit regardless of whether we have a talking stick.
                 * If we do not have it, snis_server will drop our messages */
                if (control_key_pressed)
                        voice_chat_start_recording(VOICE_CHAT_DESTINATION_ALL, 0);
                else
                        voice_chat_start_recording(VOICE_CHAT_DESTINATION_CREW, 0);
        }

voice_chat_start_recording() ultimately sets up a portaudio thread to begin recording data and calling the recording_callback function described below.

This recording callback function cannot directly just compress and transmit the data to the server, as it might conceivably fall behind the recording process if say, writing to the network socket blocks or is slow. So it puts the data into an “outgoing” queue and returns. It looks like this:

static void recording_callback(void *cookie, int16_t *buffer, int nsamples)
{
        if (nsamples != VC_BUFFER_SIZE)
                return; /* unexpected buffer size, drop it */
        memcpy(recording_buffer.audio_buffer, buffer, nsamples * sizeof(*buffer));
        recording_buffer.nsamples = nsamples;
        if (recording_audio)
                recording_level = get_max_level(&recording_buffer);
        else
                recording_level = 0;
        enqueue_audio_data(&outgoing, recording_buffer.audio_buffer, recording_buffer.nsamples,
                        recording_buffer.destination, recording_buffer.snis_radio_channel);
}

When the “transmit” key is released, the portaudio recording thread is stopped, and if the client is in possession of any token, it is released to the server where it may then be handed out again to whichever client asks for a token.

        if (event->keyval == GDK_F12) {
                voice_chat_stop_recording(); /* This shuts down the portaudio thread that was recording. */
                /* We release even if we don't have it; snis_server will know the real deal. */
                release_talking_stick();
        }

Compressing audio

There is then another thread that consumes audio data from this outgoing queue, compresses it with libopus, then sends it on to the server. The meat of that function looks like this:

        while (1) {

                /* Get an audio buffer from the queue */
                b = dequeue_audio_buffer(q);
                if (!b) {
                        rc = pthread_cond_wait(&q->event_cond, &q->mutex);
                        if (q->time_to_stop)
                                goto quit;
                        if (rc != 0)
                                fprintf(stderr, "pthread_cond_wait failed %s:%d.\n", __FILE__, __LINE__);
                        continue;
                }
/* ... */
                /* Encode audio buffer */
                len = opus_encode(encoder, b->audio_buffer, VC_BUFFER_SIZE, b->opus_buffer, OPUS_PACKET_SIZE);
                if (len < 0) { /* Error */
                        fprintf(stderr, "opus_encode failed: %s\n", opus_strerror(len));
                        goto quit;
                }

                /* Transmit audio buffer to server */
                transmit_opus_packet_to_server(b->opus_buffer, len, b->destination, b->snis_radio_channel);
        }

The function that does the compression is opus_encode(), and transmit_opus_packet_to_server() transmits the compressed audio to the server.
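For completeness, the enqueue side of such a producer/consumer queue looks roughly like this (a hypothetical simplified structure; the real queue also carries the audio data, destination, and channel):

```c
#include <pthread.h>
#include <stddef.h>

struct audio_buffer {
	struct audio_buffer *next;
	/* ... audio data, destination, channel, etc. ... */
};

struct audio_queue {
	pthread_mutex_t mutex;
	pthread_cond_t event_cond;
	struct audio_buffer *head, *tail;
	int time_to_stop;
};

/* Append a buffer to the tail of the queue and wake the consumer thread. */
void enqueue_audio_buffer(struct audio_queue *q, struct audio_buffer *b)
{
	pthread_mutex_lock(&q->mutex);
	b->next = NULL;
	if (q->tail)
		q->tail->next = b;
	else
		q->head = b;
	q->tail = b;
	pthread_cond_signal(&q->event_cond); /* wake the consumer if it's waiting */
	pthread_mutex_unlock(&q->mutex);
}
```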

Receiving and routing the audio on the Server

When the server receives a packet of compressed audio from a client, it knows which client it came from (by which socket it arrived on and which thread is monitoring that socket), and which token, if any, that client currently possesses (the server handed the tokens out in the first place, so it remembers which client has which token, if any).

If the client does not have a token, the packet is dropped. If it does have a token, then this token determines which of the 4 audio channels this data belongs to, and the data is fanned out to the destination clients along with the token number. The client which sent the data is generally excluded from receiving its own audio data back, as there’s no point in repeating back to them what they just said but with a slight delay.

That code looks like this:

        if (c->talking_stick == NO_TALKING_STICK) {
                /* Client does not have talking stick. */
                return 0;
        }
        /* Ignore audio chain from client, it put NO_TALKING_STICK there anyway 'cause it doesn't know */
        audio_chain = c->talking_stick;
        pb = packed_buffer_allocate(10 + datalen);
        packed_buffer_append(pb, "bhbwhr", OPCODE_OPUS_AUDIO_DATA,
                                (uint16_t) audio_chain, destination, radio_channel, datalen, buffer, datalen);

        /* Don't send a client's own audio back at him. */
        except.nclients = 1;
        except.client[0] = c - &client[0];
        except.shipid[0] = c->shipid;

        switch (destination) {
        case VOICE_CHAT_DESTINATION_CREW:
                send_packet_to_all_clients_on_a_bridge_except(c->shipid, pb, ROLE_ALL, &except);
                break;
        case VOICE_CHAT_DESTINATION_CHANNEL: /* TODO: implement radio channels */
        case VOICE_CHAT_DESTINATION_ALL:
                send_packet_to_all_clients_except(pb, ROLE_ALL, &except);
                break;
        default:
                fprintf(stderr, "Unexpected destination code %hhu in opus audio packet\n", destination);
                return -1;
        }

Decompressing and playing back data

When the client receives audio data, it is put into an “incoming” queue. The data is accompanied by a token number.

void voice_chat_play_opus_packet(uint8_t *opus_buffer, int buflen, int audio_chain)
{
        if (buflen > VC_BUFFER_SIZE)
                buflen = VC_BUFFER_SIZE;
        if (audio_chain < 0 || audio_chain >= WWVIAUDIO_CHAIN_COUNT)
                return; /* bogus chain number */
        enqueue_opus_audio(&incoming, opus_buffer, buflen, audio_chain);
}

The “incoming” queue is consumed by a thread for decoding the audio packets. The thread uses the token number for each audio packet to determine which of the 4 opus decoders (decompressors) is used to decompress the data. The opus decoders are stateful, and their state depends on previously decoded packets, so it is important not to interleave packets from different clients into a decoder.

Once the data is decompressed, it is appended to one of the 4 chains of VOIP audio data the mixer consumes according to the token number.

The meat of that code looks like this:

        while (1) {

                /* Get an audio buffer from the queue */
                b = dequeue_audio_buffer(q);
                if (!b) {
                        rc = pthread_cond_wait(&q->event_cond, &q->mutex);
                        if (q->time_to_stop)
                                goto quit;
                        if (rc != 0)
                                fprintf(stderr, "pthread_cond_wait failed %s:%d.\n", __FILE__, __LINE__);
                        continue;
                }

                /* decode audio buffer */
                i = b->audio_chain;
                len = opus_decode(opus_decoder[i], b->opus_buffer, b->nopus_bytes, b->audio_buffer, VC_BUFFER_SIZE, 0);
                if (len < 0) {
                        fprintf(stderr, "opus_decode failed\n");
                        goto quit;
                }
/* ... */
                playback_level = get_max_level(b);

                /* If it's been a couple seconds since we've seen data on this chain then
                 * inject 100ms of silence ahead of the data to put the mixer 100ms behind
                 * it so that if there's jitter or some space between subsequent packets,
                 * there's a little bit of slack before the mixer runs out. */
                mcc = wwviaudio_get_mixer_cycle_count();
                difference = mcc - last_mixer_cycle_count[b->audio_chain];
                if (difference > (4 * 48000) / VC_BUFFER_SIZE && difference < (unsigned int) 0xfffff000)  {
                        /* > about 4 seconds at VC_BUFFER_SIZE samples per mixer cycle */
                        /* < 0xfffff000 to avoid hiccup at mcc wraparound */
                        wwviaudio_append_to_audio_chain(short_silence, ARRAYSIZE(short_silence),
                                                        b->audio_chain, NULL, NULL);
                }
                last_mixer_cycle_count[b->audio_chain] = mcc;

                /* Let the mixer have the data */
                wwviaudio_append_to_audio_chain(b->audio_buffer, len, b->audio_chain, free_audio_buffer, b);
        }

Mixing the audio

The mixer mixes several (about 10-20) channels of data dedicated for preloaded sound effects and 4 channels of VOIP data. The VOIP data is in the form of a linked list. When one chunk of audio data is consumed the mixer calls a callback function associated with that data (typically used to free the buffers containing the data) and then the mixer moves on to the next chunk in the linked list. Data may be appended to the list at any time by the thread doing decompression of audio data incoming from the network.
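A sketch of how a mixer might walk one of those linked chains, firing each chunk’s callback as the chunk is consumed (hypothetical structure; the real one lives in wwviaudio):

```c
#include <stdint.h>
#include <stddef.h>

struct audio_chunk {
	int16_t *samples;
	int nsamples;
	int pos;			/* how much of this chunk is consumed */
	void (*when_done)(void *cookie);/* typically frees the chunk's buffers */
	void *cookie;
	struct audio_chunk *next;
};

/* Pull the next sample off a chain, advancing to the next chunk (and
 * invoking the finished chunk's callback) when a chunk runs out. */
int16_t next_sample(struct audio_chunk **chain)
{
	struct audio_chunk *c = *chain;
	if (!c)
		return 0; /* chain empty: silence */
	int16_t s = c->samples[c->pos++];
	if (c->pos >= c->nsamples) { /* chunk fully consumed */
		*chain = c->next;
		if (c->when_done)
			c->when_done(c->cookie);
	}
	return s;
}
```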

The mixer function is quite complex, but it’s here.

Hunting NaNs

•November 19, 2019 • Leave a Comment

2019-11-19 — Space Nerds in Space — Hunting NaNs

What are NaNs? A better question might be what aren’t NaNs. NaN stands for “Not a Number”, and NaNs are special floating point values that can occur when you attempt to do things like divide by zero or other “impossible” mathematical operations. Most typically in my experience, NaN generation due to dividing by zero happens when you attempt to normalize a vector with zero magnitude. Once NaNs are generated, they typically spread through any calculation in which they are used, corrupting all sorts of calculations and causing weird behavior for anything depending on those calculations (most typically, NPC ship movement.)

Since Space Nerds in Space seems to work pretty well for the most part, I sort of presumed that NaNs weren’t really something I was bumping into much — that my code was correct enough that I wasn’t generally dividing by zero much if at all. And I suppose that must have been mostly true — mostly true in that the game did work well enough, seemingly. But then I tried actually taking a look to see whether any NaNs were scurrying around down in there.

Oh my god… So many NaNs! So the way that you figure out whether you’ve got NaNs is to enable floating point exceptions to terminate your program and produce a core dump. The code to do so looks like this:


Then you compile everything without optimization and run it, and see what blows up. Well things blew up *immediately*, leaving a core file for debugging. For a day or so, this cycle continued: Compile, run, explode, debug, fix. Many NaNs knew what it was to be roasted in the depths of the core that day, I can tell you!

After a while, I slew enough NaN generating bugs that things no longer exploded immediately, but required several minutes of running before exploding. Now, I think I only have one NaN bug left (that I know of). And I have a fix for that one, I’m just not certain my fix is really right, so I want to think about it some more before I commit it (or some better fix).

In any case, if you’re interested in the gory details, you can check out the bug report on github: https://github.com/smcameron/space-nerds-in-space/issues/236

Adding Interactive Fiction into Space Nerds in Space

•August 3, 2019 • Leave a Comment

2019-08-02 — In thinking about how to make Comms more interesting and fun, taking into account that it is almost completely a text based interface, the idea had occurred to me to put some interactive-fiction-like elements in. For example, maybe there’s a mission in which you find a derelict ship and you send a robot over to investigate, commanding it via Comms in the manner of Zork and those old Infocom games.

Now I already have a Zork-like parser built into the game for “the computer”, however that’s in C, and I don’t want to build it into the game just for a particular mission. And exposing that to Lua seems like a lot of work. Like more than I want to bite off. And not really knowing Lua particularly well, the idea of writing interactive fiction in Lua directly seemed somewhat daunting and unappealing. So, for a long time, I just kind of shelved this idea as being sort of “not worth it.” But lately I started thinking about it again, and thought, well, maybe I should dig into Lua a bit. So this morning I started looking at an old and very small toy project I had done in python that implemented some basic interactive fiction features. I had done that project as a way of learning python. I decided to see what it would take to do the same thing in Lua. Turns out Lua’s tables are not all that different from python’s dictionaries, similar enough that porting this python code to Lua was actually pretty straightforward, and the resulting code is surprisingly similar to the python code. You can see it here: smcamerons-lua-adventure. The python one is here: smcamerons-python-adventure.

So with that under my belt, it should be totally possible to write a Lua mission script that incorporates these interactive-fiction ideas, so I can make this concept of Comms directing a remote robot around on a derelict ship a reality. I probably want to keep it short, maybe break it up into several different segments; I don’t want to make a mission that amounts to “Drive the ship over there and then play Zork for 3 hours,” but I think this idea has a lot of potential to make Comms more interesting, and it’s good for me to get a little more proficient with Lua.

And after a day or so more tinkering, I have a proof of concept working with COMMS. You can try it out by building and running the latest Space Nerds in Space code and typing “testintfic” on the DEMON screen, then switching to COMMS, and changing the channel to 1234.

Here’s a pic (click on it for embiggening):

Ubuntu 14.04 stopping network-manager

•July 21, 2019 • Leave a Comment

2019-07-21 — Just writing this down in case I forget.

I wanted to set up a private 192.168.1.* network with some laptops not connected to the internet for reasons. No problem, let’s manually configure some IPs. Go to network manager, and… I can put in all the stuff, but it won’t let me save. Google around a bit, oh, I have to set ipv6 to ignore. Nope. There doesn’t seem to be any way to save a manual configuration?

Same problem on both Ubuntu 14.04 and Mint 13. How the fuck did this pass QA?

Ok, fine, let me use ifconfig to set up my network.

ifconfig eth1 up

Ha, that seems to work fine. Oh, wait, no it doesn’t, here comes network manager to clobber it and shut off my newly ifconfig’ed network interface.

Ok, fine, how do I stop network manager?

Internet says:

/etc/init.d/network-manager stop

Nope, that doesn’t work. It says that I should use:

service network-manager stop

And that doesn’t work either:

stop: unknown instance

And it seems there’s a bug for this and it’s just totally FuXOR’ed.


Ok, time for the big hammer:

cd /usr/sbin
sudo mv NetworkManager NetworkManager.orig
/sbin/init 6

Now let’s see if network-manager can do jack shit when systemd or whatever the fuck can’t even find network-manager.

And now I can ifconfig in peace, without network manager sabotaging my equipment every few seconds.

And when I want my laptop back to normal, I can just mv NetworkManager.orig back to NetworkManager and reboot.

God damn, network manager is a piece of shit.

Typical Christian

•July 11, 2019 • 1 Comment

2019-07-10 — Got this comment yesterday…

Typical ignorant cowardly Christian fuckwad. “I’m a Christian bitch”, lolz. Either he’s too illiterate or too in the closet to put a comma in there.

Banned in Pakistan

•December 9, 2018 • 4 Comments

2018-12-09 — Banned in Pakistan


A Pakistan authority has demanded that we disable the following content on your WordPress.com site:


Unfortunately, we must comply to keep WordPress.com accessible for everyone in the region. As a result, we have disabled this content only for Internet visitors originating from Pakistan. They will instead see a message explaining why the content was blocked.

Visitors from outside of Pakistan are not affected.
— Begin complaint —
Dear WordPress Team,

I am writing on behalf of Web Analysis Team of Pakistan Telecommunication Authority (PTA) which has been designated for taking appropriate measures for regulating Internet Content in line with the prevailing laws of Islamic Republic of Pakistan.

In lieu of above it is highlighted that few of the web pages hosted on your platform are extremely Blasphemous and are hurting the sentiments of many Muslims around Pakistan. The same has also been declared blasphemous under Pakistan Peal Code section 295- B and is in clear violation of Section 37 of Prevention of Electronic Crime Act (PECA) 2016 and Section 19 of Constitution of Pakistan.

Oh noes, I have hurted someones feelings.

Lolz. Guess I won’t be visiting Pakistan any time ever.

Rocket Launch as Viewed from Space

•November 24, 2018 • 2 Comments

November 24, 2018 — Rocket launch as viewed from space.

via metafilter.