Category Archives: Haxe

Introducing snake-server, a simple local web server for Haxe

Many programming languages, or their communities of library authors, offer a way to quickly start a web server in a specific local directory with just one simple command. For instance, Python users can run python3 -m http.server 8000, or Node.js users can install the http-server module globally and run http-server -p 8000. For reference, here’s a big list of http static server one-liners for a variety of languages and their run-times.

When working in Haxe and compiling to JavaScript for HTML pages, I often need to launch an HTML file in a web browser. It’s always best to use a server instead of double-clicking an HTML file in your operating system’s file explorer, because web browsers often behave differently with local files than with content served from a web server. I typically reach for the Node.js command that I mentioned above. Similarly, OpenFL’s command line build tools use the same Node.js http-server module when you run the lime test html5 command to run your project in a browser.

As a Haxe developer, I can’t help but feel like the Haxe community should be able to reach for a pure Haxe solution that starts a local web server, and it shouldn’t rely on another language’s tooling.

Last week, with that idea in mind, I started working on snake-server 🐍, a Haxe library (built on sys.net.Socket from the Haxe standard library) that can be used to start a web server in a specific directory for local development — all with one quick command.

haxelib run snake-server

By default, it serves the current working directory on port 8000 at local IP 127.0.0.1, using the HTTP/1.0 protocol. However, there are command line options for customizing all four of those things.

Install snake-server with a one-time command:

haxelib install snake-server

It’s called snake-server because I actually ported Python’s http.server and socketserver modules to Haxe. To be clear, not as externs, but as pure Haxe that works on any sys target, including the Haxe interpreter, HashLink, hxcpp, and Neko. I figured an official Python module would have the most important edge cases covered, and I could see that it was built on a raw TCP socket API similar to the one available in Haxe, so porting that existing code seemed like the quickest path to a solid solution.
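To give a sense of the raw socket API involved, here is a minimal sketch of a Haxe server that answers a single request and exits. This is just an illustration of the underlying building blocks, not snake-server’s actual code:

import sys.net.Host;
import sys.net.Socket;

class MiniServer {
    public static function main():Void {
        var server = new Socket();
        // listen on the same defaults that snake-server uses
        server.bind(new Host("127.0.0.1"), 8000);
        server.listen(10);
        var client = server.accept();
        // read the request line, e.g. "GET / HTTP/1.0"
        var requestLine = client.input.readLine();
        Sys.println('Request: $requestLine');
        // answer with a fixed HTTP/1.0 response
        var body = "Hello from Haxe!";
        client.output.writeString("HTTP/1.0 200 OK\r\n"
            + "Content-Type: text/plain\r\n"
            + 'Content-Length: ${body.length}\r\n'
            + "\r\n" + body);
        client.close();
        server.close();
    }
}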

The server handles each request in a separate thread. I skipped this part at first because it didn’t seem necessary. However, as I finished implementing HTTP/1.1 support, I realized it was vital: keep-alive requests would block on the final socket read until timeout, which prevented other socket writes from fully flushing, so pages would appear to load partially for several seconds and then finish all at once after the timeout delay. Handling each request in a separate thread made everything snappy while still supporting the keep-alive optimization.
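Here is a rough sketch of that thread-per-connection pattern with sys.thread.Thread. Again, this is an illustration rather than snake-server’s actual code, and it assumes a sys target with thread support:

import sys.net.Host;
import sys.net.Socket;
import sys.thread.Thread;

class ThreadedServer {
    public static function main():Void {
        var server = new Socket();
        server.bind(new Host("127.0.0.1"), 8000);
        server.listen(10);
        while (true) {
            // the accept loop stays responsive because each connection
            // (and any blocking keep-alive read) runs on its own thread
            var client = server.accept();
            Thread.create(() -> handleConnection(client));
        }
    }

    static function handleConnection(client:Socket):Void {
        try {
            // don't block forever waiting for another keep-alive request
            client.setTimeout(5);
            var requestLine = client.input.readLine();
            var body = "Hello from a worker thread!";
            client.output.writeString("HTTP/1.1 200 OK\r\n"
                + 'Content-Length: ${body.length}\r\n'
                + "Connection: close\r\n\r\n" + body);
        } catch (e:Dynamic) {
            // timeouts and closed connections simply end the thread
        }
        client.close();
    }
}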

In your terminal, you can exit the server with the standard Ctrl+C keyboard shortcut. Occasionally, if you exit snake-server, and then try to start snake-server again on the same port too quickly, it’ll say that the port is still in use. It should become available within seconds, though. I think other languages may provide a way to ensure this doesn’t happen, but I’m not sure if that’s catching the Ctrl+C interrupt signal, or if it involves tweaking the socket configuration in some way. I don’t think Haxe exposes what I need, but maybe I’m wrong? Let me know if you know how to fix this issue.

The snake-server source code is on GitHub. Bug fix PRs are welcome, especially for things that behave differently than the original Python implementation. However, please create an issue to ask before submitting new feature PRs. I plan to keep the Haxe code as close to the original Python code as I possibly can, to avoid future maintenance headaches if I port important changes from future Python updates.

I hope that the Haxe community finds snake-server to be useful for their everyday web development needs. If you like snake-server, please consider a monthly donation towards my open source contributions on my GitHub Sponsors page. Thank you!

OpenFL devlog: HashLink/C compilation

I thought I’d share another update about the new features that I’ve been working on recently for the next versions of OpenFL and Lime. In today’s post, I’m going to talk a bit about expanding OpenFL’s HashLink target to support not just HashLink/JIT, as it has for a while, but also HashLink/C on Windows, macOS, and Linux.

If you’re not familiar, OpenFL is an implementation of the APIs available in Adobe Flash Player (and Adobe AIR) using the Haxe programming language. Projects built with Haxe can be cross-compiled to JavaScript, C++, and many other targets, making them available on the web, on desktop, on mobile, on game consoles, and more. No browser plugins required, and fully native C++ performance.

HL/JIT vs HL/C

HashLink is a virtual machine for the Haxe programming language. As I understand it, HashLink is considered similar to Adobe AIR in many ways. It has a bytecode format with JIT and garbage collection, which is compiled from a high-level language with a mix of object-oriented and functional programming paradigms. Its APIs are designed to be cross-platform — including window management, graphics, and audio. And it can be extended by loading native code libraries and exposing them to higher-level code.

One interesting thing about HashLink is that it can compile to two different formats. As mentioned above, HashLink supports its own bytecode format, which makes compilation fast and runtime performance relatively decent for day-to-day development. However, it can also cross-compile to low-level C code, which you can compile into fully native machine code with your favorite C compiler on a variety of operating systems (there are HashLink games running on desktop, mobile phones/tablets, and even game consoles). The HL/C workflow may take a bit longer for each individual build, but runtime performance is even better than with HL/JIT. Best of all, your Haxe code will behave exactly the same either way, whether compiled to bytecode or to C.

As of Lime 8.1 and OpenFL 9.3, the HashLink target supports only HL/JIT. As I mentioned, the HL/JIT target performs relatively well and offers a nice development workflow. It’s perfectly fine to release apps using HL/JIT if it meets your needs, which made HL/JIT a good enough starting point for previous OpenFL contributors to implement first. After a bit of research, I figured it wouldn’t be too difficult to support HL/C as well, so I started working on it for the upcoming Lime 8.2 release.

Step 1: Haxe compilation for HL/C

From the Haxe side, switching between HL/JIT and HL/C is a minor change. The example below compiles to bytecode for HL/JIT:

haxe -hl bin/MyApp.hl -main MyApp

All you need to do is change the output file extension from .hl to .c when calling the Haxe compiler:

haxe -hl bin/MyApp.c -main MyApp

I added a new lime build hlc command to the Lime tools (or lime build hl -hlc, if you prefer), and that makes Lime generate HL/C code instead of HL/JIT bytecode.

Step 2: C compilation for HL/C

However, an extra step is required after that. Once you have generated some C code, it needs to be passed to a C compiler. Since macOS is my current primary platform, I decided to start there and see how easily I could compile HL/C with the gcc compiler included with Apple’s Xcode.

The Haxe and HashLink documentation both provide pretty good starting points for how to compile the C code. I ended up with something similar to the following command (some file system paths have been simplified a bit, to keep the full command from being overly long):

gcc -O3 -o bin/MyApp -std=c11 -I obj obj/MyApp.c lib/libhl.dylib lib/fmt.hdll lib/mysql.hdll lib/sqlite.hdll lib/ssl.hdll lib/ui.hdll lib/uv.hdll lib/lime.hdll

Basically, you run gcc with a high level of optimization, using the 2011 version of the C language, and including all of the C code and headers generated by Haxe. Additionally, the executable should load libhl.dylib (the core HashLink dynamic library), and a number of .hdll files (other dynamic libraries for HashLink).

On macOS, executables are usually wrapped in .app bundles that macOS treats as a single file, even though a bundle is really a directory containing many files. Lime already generated an .app bundle for HashLink/JIT, and that required a bit of finessing to work properly with HashLink/C. The executable generated by gcc was having trouble finding libhl.dylib and the .hdll files when they were all bundled together inside the .app bundle. I ended up using install_name_tool to force the executable to look for the dynamic libraries in the same directory as itself, using the @executable_path value in the paths. The command looks something like this:

install_name_tool -change lime.hdll @executable_path/lime.hdll bin/MyApp.app/Contents/MacOS/MyApp

Figuring out exactly which install_name_tool sub-command to use (and even determining whether that was the correct approach) took a bit of trial and error. To add to the complexity, I was seeing slightly different behavior between the version of HashLink that Lime bundles and a nightly build of HashLink that I downloaded from the official project. Even though Lime bundles HashLink, we also want to support newer or custom builds whenever possible. In the end, I got both the bundled and custom versions working, and I was ready to move on to other operating systems.

I thought Linux would be super easy after macOS, since it would also use gcc and both operating systems are Unix-ish. However, I quickly ran into similar issues where the executable couldn’t find the dynamic libraries. After a few hours of banging my head on the desk, I decided to take a break and try Windows instead, and revisit Linux later. Sometimes, when I get stuck, letting my brain do some background processing for a few hours (or days) can yield surprising successes.

Compiling HL/C on Windows had its own issues, of course. I initially settled on requiring gcc on Windows too, even if it’s not the default C compiler for most Windows developers. I tried to support Visual Studio’s C compiler, but it was failing due to a Haxe compiler issue that I needed to report (more on that in a second).

Anyway, it’s relatively easy to get gcc for Windows thanks to the MinGW-w64 project. I ended up installing it quickly in a terminal with the Chocolatey package manager.

choco install mingw

You could probably also get it with the Scoop package manager, or download the MinGW-w64 executables/installer directly from the project.

The gcc command for Windows was very similar. As far as differences go, I mainly needed to set the “windows” subsystem instead of the default “console” subsystem so that an extra terminal window didn’t open when double-clicking the .exe file, and I needed to add dbghelp.dll as a dependency, for some reason (if you understand why, I’d be curious to hear an explanation or workaround). Thankfully, I didn’t need any extra commands similar to install_name_tool on Windows to get the executable to find the .hdll files.

gcc -O3 -o bin/MyApp -std=c11 -Wl,-subsystem,windows -I obj obj/MyApp.c C:/Windows/System32/dbghelp.dll lib/libhl.dll lib/fmt.hdll lib/mysql.hdll lib/sqlite.hdll lib/ssl.hdll lib/ui.hdll lib/uv.hdll lib/lime.hdll

I was actually able to get a cl.exe (Visual Studio’s C compiler) command working locally, if I did a find/replace for _restrict (which is generated from OpenFL’s TextField.restrict property) in the output C code. It seems that cl.exe treats some things differently than gcc. I submitted a bug report for the Haxe compiler (which has since been partially addressed, and I’ll come back to that in a moment), and I declared MinGW support on Windows good enough to start with.

After I got Windows with MinGW working, I went back to Linux. One morning, I recalled reading about something called rpath, which is used to find dynamic libraries on both macOS and Linux. However, since I hadn’t needed it on macOS, I didn’t think much of it when I saw it mentioned for Linux too. This time around, I decided to use rpath as my starting point for web searches, and eventually, I found an example that mentioned an $ORIGIN value that could be passed to the linker options, working similarly to @executable_path on macOS.

I also had to specify the libraries a bit differently using -L for the parent directory and listing the .so and .hdll files by name instead of full paths.

gcc -O3 -o bin/MyApp -std=c11 -Wl,-rpath,$ORIGIN -I obj obj/MyApp.c -L lib libhl.so fmt.hdll mysql.hdll sqlite.hdll ssl.hdll ui.hdll uv.hdll lime.hdll

I tried doing the same thing with rpath and -L on macOS, to make the commands more consistent, but apparently, gcc (or something else in the operating system) works differently with dynamic libraries, so the macOS command(s) had to remain different from Linux. Not a big deal, but I just wish they were more similar.

At this point, I had a few extra moments to spare, and I realized that I could probably allow HL/C code to be compiled with clang instead of gcc too. It turns out that I was right: the same command line options worked without changes, and only the command name changed from gcc to clang, at least on macOS and Linux. On Windows, I ran into some strange issues with certain Windows APIs not being recognized, and I honestly don’t know enough about clang (or C development in general) to know how to fix that. Perhaps someone else can contribute that, and we’ll have a full range of compilers supported on all platforms.

There was one other detail that came up. On Macs with Apple Silicon CPUs, gcc and clang default to compiling for Apple Silicon, but currently, Lime’s and HashLink’s .hdll libraries are built strictly for Intel CPUs. So I needed to precede the gcc command with the arch command to ensure that the build targets Intel and stays compatible with the libraries.

arch -x86_64 gcc [options]

Lime and HashLink for Intel both work well on Apple Silicon with Rosetta, by the way. However, I don’t doubt that we’ll revisit full Apple Silicon support in the future.

After I submitted the bug report about certain HL/C code failing to compile with Visual Studio, the Haxe compiler team fixed the issue right away (at least, partially) and released Haxe 4.3.3 soon after. Most vanilla OpenFL apps should compile with this specific Haxe version, but there are some edge cases affecting Feathers UI that will require another small update to Haxe (or some renamed variables in Feathers UI).

With the other C compilers working on all platforms, I went back to trying to get cl.exe from Visual Studio to compile. I decided that the first step would be to get the cl.exe command working in Visual Studio’s special developer console, which automatically sets up various environment variables for you. I knew I’d eventually want to be able to build OpenFL apps from any random Windows terminal window, but I could look into that later. One step at a time.

I ended up with a command that looks something like this:

cl.exe /Ox /Fe:bin/MyApp.exe -I obj obj/MyApp.c -L lib libhl.lib fmt.lib mysql.lib sqlite.lib ssl.lib ui.lib uv.lib lime.lib /link /subsystem:windows

When an executable is built on Windows with Visual Studio, and it uses a .dll file (or .hdll, in the case of HashLink libraries), the .dll file is associated with a separate .lib file. gcc and clang seem to be able to link directly to a HashLink .hdll file, but Visual Studio’s cl.exe wants the .lib file instead. HashLink is distributed with .hdll and .lib files for its core library. Lime wasn’t bundling those .lib files in 8.1 and older, so I made a minor tweak to the Lime build process to copy them next to the .hdll files that we were already bundling (the .lib files were already built; they simply weren’t copied because HL/JIT doesn’t need them).

With that working, the next step was to figure out how to set up Visual Studio’s build environment in a random command prompt instead of requiring OpenFL users to open the Visual Studio developer console.

First, I had to figure out where Visual Studio was installed. I learned that Visual Studio comes with an executable called vswhere.exe, which is guaranteed to be available at the following location if Visual Studio is installed, regardless of which version of Visual Studio we’re dealing with:

C:\Program Files (x86)\Microsoft Visual Studio\Installer\vswhere.exe

Running vswhere.exe with the following options should print the location of Visual Studio:

vswhere.exe -latest -products * -requires Microsoft.VisualStudio.Component.VC.Tools.x86.x64 -property installationPath

Visual Studio includes a batch file, called vcvarsall.bat, which can be used to set the appropriate environment variables. Using the installation path returned by vswhere.exe, you just need to append VC\Auxiliary\Build\vcvarsall.bat to the end. It takes a few options, but the only required one is the architecture. Lime uses the x86_64 architecture (sometimes abbreviated as x64) by default on Windows, so I can pass x64 for the architecture option.

vcvarsall.bat x64

Finally, I needed to combine everything together into a single command that Lime’s tools could run. This one was a little trickier than I expected, and it took me some trial and error, but I ended up launching a separate cmd.exe instance to make things work. The command looks something like this (cl.exe options omitted for brevity):

cmd.exe /s /c vcvarsall.bat x64 && cl.exe [options]

Whew! And that was the last C compiler that I wanted to get working! Now, users won’t need to manually run all of the commands above. They’ll just need to run this one, and Lime’s tooling will do all of the heavy lifting:

lime build hlc

Much better!

Where can I find the code?

If you want to try out the new hlc target for OpenFL, you’ll need to check out Lime’s 8.2.0-Dev branch on GitHub, or you can download the lime-haxelib artifact from a successful GitHub Actions Lime 8.2.0-Dev nightly build. If you check out Lime’s source code from GitHub, you’ll need to manually rebuild the .hdll binary files and Lime’s command line tools, so you may find it easier to go with the pre-built artifact. Of course, nightly builds are not necessarily ready for release to Haxelib, so use them at your own risk in production. You may encounter some bugs, but we’d love any feedback you can give. Thanks, and happy coding! The new hlc target was released to Haxelib as part of Lime 8.2.0 in October 2024.

If you love these devlogs, and my contributions to OpenFL and Lime in general, please consider a monthly donation as a token of appreciation on my GitHub Sponsors page. Thank you!

OpenFL devlog: updateAfterEvent(), FileReference improvements, option to use Terser minifier

I thought I’d share another update about the new features that I’ve been working on recently for the next versions of OpenFL and Lime. In today’s post, I’m going to talk about implementing updateAfterEvent(), which makes it possible to influence OpenFL’s rendering frequency; improvements for downloading and uploading files with openfl.net.FileReference; and adding the Terser minifier to Lime as an option for the html5 target.

If you’re not familiar, OpenFL is an implementation of the APIs available in Adobe Flash Player (and Adobe AIR) using the Haxe programming language. Projects built with Haxe can be cross-compiled to JavaScript, C++, and many other targets, making them available on the web, on desktop, on mobile, and more. No browser plugins required, and fully native C++ performance.

Implementing updateAfterEvent()

In Flash, the MouseEvent, TouchEvent, KeyboardEvent, and TimerEvent classes all have a method named updateAfterEvent(). According to the API reference for Adobe AIR, this is what the method does:

MouseEvent.updateAfterEvent()
Instructs Flash Player or Adobe AIR to render after processing of this event completes, if the display list has been modified.

That description may not be clear enough, so let me expand a bit. Calling updateAfterEvent() inside certain event listeners asks Flash to render more often than the requested frame rate would normally allow. Basically, this method allows developers to set the frame rate very low to improve idle performance (12 FPS was common back in the day, and it might have even been the default in the Flash authoring tool, but you could potentially go as low as 1 FPS). However, when the user is interacting with mouse/keyboard/touch, you can call updateAfterEvent() to effectively increase the frame rate temporarily so that user interaction feels very smooth.
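Here is a small usage sketch of that idea (the class name and values are just for illustration): the project requests a low frame rate, but a MOUSE_MOVE listener calls updateAfterEvent() while the user drags, so the interaction still feels smooth.

import openfl.display.Sprite;
import openfl.events.Event;
import openfl.events.MouseEvent;

class DragExample extends Sprite {
    public function new() {
        super();
        graphics.beginFill(0x3388ff);
        graphics.drawRect(0, 0, 100, 100);
        graphics.endFill();
        addEventListener(Event.ADDED_TO_STAGE, onAddedToStage);
    }

    private function onAddedToStage(event:Event):Void {
        stage.frameRate = 12; // very low requested frame rate
        addEventListener(MouseEvent.MOUSE_DOWN, e -> startDrag());
        stage.addEventListener(MouseEvent.MOUSE_UP, e -> stopDrag());
        stage.addEventListener(MouseEvent.MOUSE_MOVE, onMouseMove);
    }

    private function onMouseMove(event:MouseEvent):Void {
        // render right after this event instead of waiting for the next
        // scheduled frame (for MOUSE_MOVE specifically, remember the
        // openfl_always_dispatch_mouse_events define mentioned below)
        event.updateAfterEvent();
    }
}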

Implementing updateAfterEvent() in OpenFL mostly involved some changes to the openfl.display.Stage class. The stage relies on lime.ui.Window to manage the requested frame rate. Every time that the Window is ready to render a new frame, it will dispatch its onRender event. The stage listens for this event, and that kicks off not just rendering all of the display objects, but also the dispatching of its own events, like Event.ENTER_FRAME, Event.FRAME_CONSTRUCTED, and Event.EXIT_FRAME that are used heavily by developers for animations and things.

OpenFL’s handling of mouse/touch/keyboard events also happens within the Stage object, so forcing a render outside of the normal frame rate is relatively convenient to implement since there is no need to communicate with other classes. However, within the Stage class, I did need to refactor a little to separate the rendering code from the code that dispatches events like ENTER_FRAME, because updateAfterEvent() should not cause MovieClip or other display objects to advance their frame playheads. It simply renders them again in their current frame (if necessary).

I had to add one hack to make this work on native targets, like C++, Neko, and HashLink. On these targets, before Lime dispatches onRender on a window, Lime also calls window.__backend.render(). Then, after the dispatch, it calls window.__backend.contextFlip(). If these backend methods aren’t called, the state of OpenFL updates, but nothing visually changes on screen. These methods should be internal to Lime, but OpenFL currently has to break encapsulation to call them in order to force Lime to render outside of the normal frame rate. Ideally, Lime would have a public API that could force a render. This should be revisited in the future.

If you want to try out updateAfterEvent() with mouse events at a low frame rate, I should mention one thing that you’ll need to do first. OpenFL has an optimization (enabled by default) that throttles MouseEvent.MOUSE_MOVE events so that they don’t get dispatched more than once between two consecutive frames (technically, that’s not always true, but it’s close enough for our purposes). Since you’ll get MOUSE_MOVE only once between frames, updateAfterEvent() doesn’t add a lot of benefit because it can only trigger one extra render, at most. However, the person who implemented this MOUSE_MOVE optimization provided a define that will restore Flash’s original behavior that allowed multiple MOUSE_MOVE events between frames.

Add the following to your OpenFL project.xml file, and you’re good to go:

<haxedef name="openfl_always_dispatch_mouse_events"/>

OpenFL does not throttle TouchEvent or KeyboardEvent in the same way, so calling updateAfterEvent() on one of those classes will work automatically, without a special define.

FileReference improvements

The openfl.net.FileReference class is intended for downloading files from and uploading files to a server. It conveniently provides native file system dialogs to let the user choose where to save a downloaded file or to select an existing file to upload.

The first improvement that I implemented was ensuring that FileReference could successfully download binary files with its download() method. Internally, FileReference uses a basic openfl.net.URLLoader. By default, the dataFormat property of URLLoader is set to URLLoaderDataFormat.TEXT, and the previous implementation did not change the dataFormat from that default value. If a binary file is loaded as a String instead of a ByteArray, data may be corrupted when attempting to save it to a file. On the other hand, a plain text file can be loaded as a ByteArray without any data loss. The fix to allow downloading binary files, while preserving the ability to download text files too, was as simple as using URLLoaderDataFormat.BINARY instead.
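For reference, here is a small usage sketch of downloading a binary file with FileReference (the URL and file name are placeholders). The download is triggered from a click handler, since browsers generally require a user gesture before a save dialog may open:

import openfl.display.Sprite;
import openfl.events.MouseEvent;
import openfl.net.FileReference;
import openfl.net.URLRequest;

class DownloadExample extends Sprite {
    // keep a reference so the FileReference isn't garbage collected mid-download
    private var file:FileReference = new FileReference();

    public function new() {
        super();
        graphics.beginFill(0xcccccc);
        graphics.drawRect(0, 0, 120, 40);
        graphics.endFill();
        addEventListener(MouseEvent.CLICK, onClick);
    }

    private function onClick(event:MouseEvent):Void {
        file.download(new URLRequest("https://example.com/picture.png"), "picture.png");
    }
}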

Until now, OpenFL’s implementation of FileReference did not provide an upload() method at all; it simply threw an exception. Constructing the HTTP request to upload a file to a server is kind of complicated, so I understand why this didn’t exist yet, but my client needed it, so I had to wade into the murky waters and figure out the multipart/form-data content type.

Most HTTP POST requests are submitted using the application/x-www-form-urlencoded content type. Parameters are sent separated by the & character with names and values separated by the = character, and certain characters are URL encoded, like this:

param1=value1&param2=value2&param3=value3%20with%20spaces

Submitting a file in an HTTP POST request wouldn’t work well with that format, especially if the file were binary instead of text, which is why the multipart/form-data content type is needed. It cleanly separates each parameter with more elaborate boundaries than the & character, and it doesn’t modify the values at all.
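For comparison, the body of a multipart/form-data request looks roughly like this (the boundary is an arbitrary marker string that also appears in the request’s Content-Type header, and the field names here are just examples):

--ExampleBoundary123
Content-Disposition: form-data; name="param1"

value1
--ExampleBoundary123
Content-Disposition: form-data; name="Filedata"; filename="upload.png"
Content-Type: application/octet-stream

(raw file bytes)
--ExampleBoundary123--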

Normally, URLLoader generates the HTTP request on its own. However, when using multipart/form-data, it’s necessary to build part of it manually. I ended up creating a ByteArray, writing most of the request body with multiple calls to writeUTFBytes(), writing the file data with writeBytes(), and passing the resulting ByteArray to the data property of the URLRequest that the URLLoader loads.
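Here is a simplified sketch of that approach. It’s not OpenFL’s actual FileReference code: the boundary and field names are placeholder values, and it ignores URLVariables, which are covered next:

import openfl.net.URLLoader;
import openfl.net.URLRequest;
import openfl.net.URLRequestMethod;
import openfl.utils.ByteArray;

class UploadSketch {
    public static function upload(url:String, fileName:String, fileBytes:ByteArray):Void {
        var boundary = "----HaxeUploadBoundary1234";
        // build the multipart body by hand
        var body = new ByteArray();
        body.writeUTFBytes('--$boundary\r\n');
        body.writeUTFBytes('Content-Disposition: form-data; name="Filedata"; filename="$fileName"\r\n');
        body.writeUTFBytes("Content-Type: application/octet-stream\r\n\r\n");
        body.writeBytes(fileBytes); // the raw file contents, untouched
        body.writeUTFBytes('\r\n--$boundary--\r\n');

        // hand the body to URLLoader through the request
        var request = new URLRequest(url);
        request.method = URLRequestMethod.POST;
        request.contentType = 'multipart/form-data; boundary=$boundary';
        request.data = body;

        var loader = new URLLoader();
        loader.load(request);
    }
}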

If the original URLRequest contains URLVariables, these also need to be taken into consideration. If the request’s method is set to URLRequestMethod.GET, the variables can be appended to the URL’s search query. However, if the method is URLRequestMethod.POST instead (which is most common for file uploads), the variables need to be inserted into the ByteArray that gets generated above.

Optionally using Terser to minify JS

By default, Lime uses Google’s Closure Compiler to minify JavaScript for the html5 target when you use either the -final or -minify command line flags. Closure Compiler often generates smaller minified JavaScript than most other minifiers, so it makes good sense as the default for Lime and OpenFL. However, there are other popular minifiers out there, and for some JavaScript code, another minifier can be the better choice, so I thought it would be nice if Lime could provide more options.

In fact, Lime has long had the ability to use YUI Compressor instead of Closure Compiler, by adding the -yui command line flag. YUI Compressor was once one of the leading JavaScript minifiers, but it stopped being updated to support newer JavaScript syntax, so it may fail when it sees code that it doesn’t understand. In my experience, using -yui with OpenFL is impossible because OpenFL includes some JavaScript libraries that YUI Compressor can’t handle properly. I mention it only to show that Lime already supports multiple minifiers, so why not one more?

One of the most popular minifiers in the JavaScript ecosystem today is called Terser. Terser runs on Node.js and is included with Webpack by default, so a lot of JavaScript projects use Terser, even if they don’t realize it. Considering its popularity, in my mind, Terser was an obvious choice to include as an alternative to the default Closure Compiler.

Lime already bundled Node.js binaries to launch an HTTP server when you run the lime test html5 command. However, the included Node.js version had gotten pretty far out of date, so I updated it to Node.js v18, the recommended LTS release at the time of writing. Then, I installed Terser and all of its dependencies locally, and I copied the necessary files into Lime’s repository. Inside Lime’s command line tools, I implemented a new -terser command line flag to tell Lime to use Terser instead of Closure Compiler when minifying for the html5 target. To build, run lime build html5 -minify -terser, and you’re good to go (it also works with test instead of build).

I also added a new -npx command line flag. This tells Lime’s command line tools to use the npx tool from Node.js to run the latest versions of the google-closure-compiler or terser npm modules instead of the ones bundled with Lime. This should allow developers to take advantage of the latest minifier features (or possibly work around any bugs) without waiting for Lime to update its bundled dependencies and release an update.
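For example, assuming Node.js is installed so that npx is available, minifying the html5 output with the latest Terser release from npm might look like this:

lime build html5 -minify -terser -npx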

Where can I find the code?

If you want to try out the new updateAfterEvent() feature, the improved FileReference class, or the Terser minifier, you’ll need to check out Lime’s 8.1.0-Dev branch and OpenFL’s 9.3.0-Dev branch on GitHub, or you can download both the lime-haxelib artifact from a successful GitHub Actions Lime 8.1.0-Dev nightly build and the openfl-haxelib artifact from a successful GitHub Actions OpenFL 9.3.0-Dev nightly build. If you check out Lime’s source code from GitHub, you’ll need to rebuild the .ndll binary files and Lime’s command line tools, so you may find it easier to go with the pre-built artifacts. Of course, nightly builds are not necessarily ready for release to Haxelib, so use them at your own risk in production. You may encounter some bugs, but we’d love any feedback you can give. Thanks, and happy coding! Everything mentioned in this post has been officially released to Haxelib in Lime 8.1 and OpenFL 9.3!

If you love these devlogs, and my contributions to OpenFL and Lime in general, please consider a monthly donation as a token of appreciation on my GitHub Sponsors page. Thank you!