Dev Blog: Flatland Design Patterns
07/15/2025
By Trevor
Flatland Design Patterns
- Using the Command pattern to create and understand powerful shape tools
Command
The core of Flatland functionality is enabled by an implementation of the Command pattern. I learned about this after seeing whatshisface's talk at Handmade Conf and then watching an earlier talk on data oriented programming. The advice that led me here was something like "if you are struggling to detangle the logic of an operation, consider the command pattern". The pattern is widely discussed online, and although he wrote a chapter on it in Game Programming Patterns, it's not a perfect reference and my implementation takes only the broad strokes.
The Command Pattern
All changes to state are packaged into Commands, added to a queue, and then processed in the order they were received. Commands all implement do() and undo() functions which effect and revert their state changes. The data involved in a command is passed to its constructor, meaning each command object carries everything the program needs to execute and revert it. The queue executes commands whenever it is asked, currently on a tick managed by the rendering loop. Although this may change, it ensures that what is drawn exactly matches state, and as the developer it is immediately clear when performance issues arise.
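A minimal sketch of the shape this takes (illustrative names, not Flatland's exact API; redo is omitted):

```typescript
// Every state change implements this pair.
interface Command {
  do(): void;
  undo(): void;
}

class CommandQueue {
  private pending: Command[] = [];
  private history: Command[] = [];

  // Tools (and a few generic callers) submit commands here.
  submit(command: Command): void {
    this.pending.push(command);
  }

  // Called once per tick by the rendering loop: execute in the order received.
  process(): void {
    while (this.pending.length > 0) {
      const command = this.pending.shift()!;
      command.do();
      this.history.push(command);
    }
  }

  // Revert the most recently executed command.
  undo(): void {
    const command = this.history.pop();
    command?.undo();
  }
}
```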
For the user, Commands enable undo/redo functionality, which is important because creating perfect shapes is a process of experimentation, evaluation, and revision even with the most precise tools. Undo/redo is an essential part of a drawing app and is expected by all users.
The structure provided by Commands usually makes coding their functionality straightforward. Large operations naturally split into several ordered commands based on the data that needs to be changed.
For example, if a user closes a path by connecting the first point to the last point, a number of state changes must be made:
- the to and from properties of the first and last points must be set to the correct coordinates to allow the Bezier curve to be drawn,
- a best guess at the intended handles of the last point must be set for the curve segment,
- the shape must be added to the list of complete geometries,
- a corresponding Piece must be filed with the React UI store,
- the tool should swap to the Select tool,
- and finally the new Piece should be selected to indicate to the user that their shape closure was successful.
Thus, path closure breaks down well into a specialized ClosePathCommand, an AddPieceCommand, and a ChangeToolCommand which are much easier to reason about than a large procedure with async components (communicating with the Zustand store). Furthermore, undo is free: to undo, ChangeTool reapplies the previous tool and tool state, AddPiece calls RemovePiece on the same id string it was initialized with (implemented by the Zustand store), and ClosePath sets an entry in the geometry map to the geometry state it stored on construction, which is just an array of a few vectors and flags.
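As a rough sketch of what one of these might look like, building on the Command interface above (the geometry model and the sibling commands here are guesses for illustration, not Flatland's actual code):

```typescript
// Hypothetical geometry shape; Flatland's real data model will differ.
interface Geometry {
  points: { x: number; y: number }[];
  closed: boolean;
}

class ClosePathCommand implements Command {
  // The previous geometry is captured at construction, so undo is trivial:
  // it is just an array of a few vectors and flags.
  private previous: Geometry;

  constructor(
    private geometries: Map<string, Geometry>,
    private geometryId: string,
    private closedGeometry: Geometry // endpoints joined, handles guessed by the tool
  ) {
    this.previous = structuredClone(geometries.get(geometryId)!);
  }

  do(): void {
    this.geometries.set(this.geometryId, this.closedGeometry);
  }

  undo(): void {
    this.geometries.set(this.geometryId, this.previous);
  }
}

// The tool submits the three commands in order; processing them on the next tick
// closes the path, files the Piece with the UI store, and swaps tools:
// queue.submit(new ClosePathCommand(geometries, id, closed));
// queue.submit(new AddPieceCommand(uiStore, id));
// queue.submit(new ChangeToolCommand(toolManager, "select"));
```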
Vector- and matrix-heavy commands are the least natural operations because, for performance reasons, they often mutate large amounts of data in place instead of initializing a fresh set of vectors. At its worst, a set of delta vectors is stored in the command and applied or inverse-applied directly to the relevant vectors to allow undo, such as in Move commands for multiple selections. But for most functionality the undo is trivial, and for every command the state should be the same before and after a do-undo sequence, which is coverable with a simple unit test.
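A sketch of the delta-vector case (hypothetical names again), with the do-undo round trip checked at the end:

```typescript
// Mutates the selected points in place for performance, storing only the delta
// needed to invert the operation.
class MoveCommand implements Command {
  constructor(
    private points: { x: number; y: number }[], // the selected points, mutated directly
    private delta: { x: number; y: number }
  ) {}

  do(): void {
    for (const p of this.points) {
      p.x += this.delta.x;
      p.y += this.delta.y;
    }
  }

  undo(): void {
    for (const p of this.points) {
      p.x -= this.delta.x;
      p.y -= this.delta.y;
    }
  }
}

// The do-undo round-trip property as a simple check.
const pts = [{ x: 1, y: 2 }];
const move = new MoveCommand(pts, { x: 5, y: 0 });
move.do();
move.undo();
console.assert(pts[0].x === 1 && pts[0].y === 2);
```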
Because of the widespread use of vectors, state is not atomic, but even the most involved operations are easy to reason about and easy to debug by watching how commands are submitted and processed within the queue. Plus, undo!
Tools
It would also be sensible to have some structure around how commands are created. Not just anything can submit a command to the Command Queue: one must use a Tool! Requiring a Tool helps reduce side effects and enforces a perfect event-driven architecture.
Hahaha just kidding, there are Generic commands to allow for things like saving and loading projects, looking for imports from other projects, and rebuilding offscreen canvas buffers, which are invoked freely, with no undo functionality and heavy side effects (like applying another Queue history entirely). But mostly the above is true: Tools create Commands.
Tools are useful for implementation because they provide a surface to map event handlers onto, contextual state for use in constructing commands (what will be the geometryId of the in-progress shape? The Path Tool always knows!), and the logic to determine which command to dispatch. For example, if the user clicks on the canvas with the Select Tool, the Select Tool uses its state to look for points the user may be selecting and can access pointer position, keypresses, and geometry data to issue a SelectPoint command.
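Sketched in code, again with illustrative names and building on the CommandQueue above, that looks something like this:

```typescript
// Hypothetical Tool interface: a surface for event handlers plus per-tool state.
interface Tool {
  onPointerDown(e: PointerEvent): void;
  onPointerMove(e: PointerEvent): void;
}

// Hypothetical selection command; in the real app this would talk to selection state.
class SelectPointCommand implements Command {
  private previous: Set<string>;

  constructor(
    private selection: Set<string>,
    private pointId: string,
    private additive: boolean
  ) {
    this.previous = new Set(selection); // snapshot for undo
  }

  do(): void {
    if (!this.additive) this.selection.clear();
    this.selection.add(this.pointId);
  }

  undo(): void {
    this.selection.clear();
    this.previous.forEach((id) => this.selection.add(id));
  }
}

class SelectTool implements Tool {
  private hoveredPointId: string | null = null; // contextual tool state

  constructor(private queue: CommandQueue, private selection: Set<string>) {}

  onPointerMove(e: PointerEvent): void {
    this.hoveredPointId = this.hitTest(e.offsetX, e.offsetY);
  }

  onPointerDown(e: PointerEvent): void {
    // Tool state, pointer position, and keypresses decide which command to issue.
    if (this.hoveredPointId !== null) {
      this.queue.submit(
        new SelectPointCommand(this.selection, this.hoveredPointId, e.shiftKey)
      );
    }
  }

  private hitTest(_x: number, _y: number): string | null {
    // Would walk the geometry data looking for a point near the cursor (omitted here).
    return null;
  }
}
```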
Naively, when prototyping the first drawing interactions, I created a priority stack of operations which the tool would check sequentially on click. For example: check selecting? moving? drawing? and so on down the stack of possible action outcomes. This was immediately unmaintainable. With Tools, all events are captured by a set of handlers provided by the active tool, which activates behaviors depending on the current state of the tool.
One way to understand how complete a tool is would be to chart, for each input event, all the possible combinations of conditions in the tool's state. For simple tools like Select this set is small, less than one hundred possible configuration combinations which mostly result in a few commands. But for more powerful tools like the Path Tool, we quickly find thousands of possible configuration combinations. Fortunately, not all combinations lead to distinct Commands–mostly the Path Tool ends up adding points, removing points, and changing to sibling tools like the Bezier Point Editor (more on those in another post). Nevertheless, combinations with no resulting command indicate an incomplete tool, and finding those combinations helps me develop a more complete drawing experience. Very few interactions should Do Nothing.
The combinations of conditions that dictate the outcome of a user interaction lend themselves to a nice chart of all possible inputs, and allow me to create a coverage map for each tool, with paths charted for unique interactions. This is a much better way of mapping user interaction and deciding how complicated tools, such as the Path Tool, should behave.
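As a toy sketch of that kind of enumeration (the condition names here are made up, not the Path Tool's real state):

```typescript
// Enumerate every combination of a tool's boolean conditions for one event and
// report which combinations don't map to any command ("Do Nothing" gaps).
type Conditions = {
  overPoint: boolean;
  overSegment: boolean;
  shiftHeld: boolean;
  pathInProgress: boolean;
};

const conditionKeys: (keyof Conditions)[] = [
  "overPoint",
  "overSegment",
  "shiftHeld",
  "pathInProgress",
];

function* allCombinations(): Generator<Conditions> {
  const n = conditionKeys.length;
  for (let mask = 0; mask < 1 << n; mask++) {
    const combo = {} as Conditions;
    conditionKeys.forEach((key, i) => (combo[key] = Boolean(mask & (1 << i))));
    yield combo;
  }
}

// A stand-in for a tool's dispatch logic: returns the command it would issue, if any.
function outcomeForClick(c: Conditions): string | null {
  if (c.overPoint) return c.shiftHeld ? "AddToSelection" : "SelectPoint";
  if (c.pathInProgress) return "AddPathPoint";
  return null; // an uncovered combination
}

const gaps = [...allCombinations()].filter((c) => outcomeForClick(c) === null);
console.log(`${gaps.length} combinations currently Do Nothing`);
```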