Motivated by the fact that distributed training is embarrassingly parallel, I started work on NeoLogos
in fall 2022. NeoLogos is a CPU-based neural network trainer with functions for network synchronization and distributed training. Work is ongoing, and I plan to integrate it with NVIDIA's CUDA platform so that GPU kernel computation can accelerate client-side training.
I wrote the VM for CharCrash
in spring 2021. CharCrash is a language written directly in bytecode; its core design philosophy is that no directive is longer than two characters (aside from numeric literals). A compact language, CharCrash is designed as a more human-readable spiritual successor to esoteric languages like Brainfuck and Whitespace, while remaining Turing-complete and feasible for activities like code golf or other challenging programming. The VM is written entirely in C and is under continuing development to add operations such as network interfacing and file manipulation.
A CPU-focused rendering engine, the Sterling Engine
is a project I started in 2020 to bring the third dimension to Blake O'Hare's Crayon Language
. It uses a relatively simple vertex projection system, with support for highly configurable cameras and modular vertex and fragment shaders, which together provide a decent level of flexibility. However, because CPUs are ill-suited to the massively parallel computation that graphics acceleration demands, the project has since languished: it is difficult to improve image quality without dramatically increasing render times.