MPXE Harness
The heart of MPXE is the harness program, which interacts with clients over WebSocket connections and manages running projects.
WebSocket Interface
Input:
start project
since startup is asynchronous, the harness should send a response once the project has actually started
stop project
query entire data store
set data store state
Output:
State of the data store
To begin with, we can prototype by just periodically spewing the entire data store over WebSocket
Ideally, later on we can send only updates
It would be great if we could ‘query’ this, so that sending a WebSocket ‘request’ message triggers a send of the entire data store
CAN messages
send them raw. Any client can decode them on their own.
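As a rough sketch, the input commands above could map to a simple command enum on the harness side; all names and values here are hypothetical placeholders, not a finalized protocol:

```c
// Hypothetical command IDs for messages received over WebSocket.
typedef enum {
  MPXE_CMD_START_PROJECT = 0,  // harness replies once the project is actually up
  MPXE_CMD_STOP_PROJECT,
  MPXE_CMD_QUERY_DATA_STORE,   // triggers a one-off send of the entire data store
  MPXE_CMD_SET_DATA_STORE,     // overwrite data store state
  NUM_MPXE_CMDS,
} MpxeCmd;
```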
Components
There are four main components to the harness
Data store: stores the state of every project. Memory is shared between it and each project, so both the harness and the project can read from and write to the same memory (see the sketch after this list).
WebSocket interface: sends the state of the data store periodically, and receives commands from clients.
Project manager: manages allocating memory for projects and spinning them up via a make command.
Log manager: gathers and parses output from the projects, including CAN messages.
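As a sketch of how the data store's shared memory could work, the harness might allocate each store with POSIX shared memory (shm_open/mmap) so that a separately spawned project can map the same region by name; the store name and size here are made up for illustration:

```c
#include <fcntl.h>     // O_* flags
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>  // shm_open(), mmap()
#include <unistd.h>    // ftruncate(), close()

#define STORE_NAME "/mpxe_pedal_store"  // hypothetical store name; shm names start with '/'
#define STORE_SIZE 4096                 // hypothetical fixed store size

// Harness side: create and map a store that a spawned project can also open
// by the same name, so both processes read and write the same memory.
static void *prv_alloc_store(void) {
  int fd = shm_open(STORE_NAME, O_CREAT | O_RDWR, 0666);
  if (fd < 0) {
    perror("shm_open");
    exit(EXIT_FAILURE);
  }
  if (ftruncate(fd, STORE_SIZE) < 0) {
    perror("ftruncate");
    exit(EXIT_FAILURE);
  }
  void *store = mmap(NULL, STORE_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
  if (store == MAP_FAILED) {
    perror("mmap");
    exit(EXIT_FAILURE);
  }
  close(fd);  // the mapping stays valid after the fd is closed
  return store;
}
```

Since the mapping is MAP_SHARED, writes by either process are immediately visible to the other, with no copying in between.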
Automated testing
Testing can be done via a Python script: the script should use the Python library we define for interacting with the harness over WebSockets, and can make assertions about the system's reactions. An example flow could be:
Start pedal board
Start MCI
Set throttle to 50%
Assert MCI output is 50%
etc.
Steps for starting a project
Starting up a project would look something like this:
WebSocket interface receives a command to start up a project
Project manager allocates memory in data store for project
Project manager runs system("make run PROJECT=blablabla PLATFORM=x86") and pipes the output to the log manager
Project manager sets up a message queue for the project to talk to the project manager (see the sketch after these steps)
Each driver gets a key to the message queue the project manager set up
Log manager taps into virtual CAN bus to collect those messages
Project updates data store and sends a message via message queue
WebSocket interface sends WebSocket messages whenever the data store is updated
Log manager sends CAN messages over WebSocket as it receives them
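As a rough sketch of the make/popen and message-queue steps above, assuming POSIX message queues (mq_open); the queue name, depth, and message size are placeholders:

```c
#include <fcntl.h>   // O_* flags
#include <mqueue.h>  // POSIX message queues (link with -lrt)
#include <stdio.h>

#define PROJECT_MQ_NAME "/mpxe_pedal_mq"  // hypothetical queue name ("key") the drivers open

// Spawn a project. popen() (rather than system()) gives the harness a pipe to
// the project's stdout, which is what gets handed to the log manager.
static FILE *prv_start_project(const char *project) {
  char cmd[256];
  snprintf(cmd, sizeof(cmd), "make run PROJECT=%s PLATFORM=x86", project);
  return popen(cmd, "r");
}

// Create the message queue the project's drivers will use to talk to the
// project manager; drivers later mq_open() the same name on their side.
static mqd_t prv_setup_mq(void) {
  struct mq_attr attr = {
    .mq_maxmsg = 10,    // arbitrary queue depth for illustration
    .mq_msgsize = 256,  // arbitrary max message size for illustration
  };
  return mq_open(PROJECT_MQ_NAME, O_CREAT | O_RDONLY, 0666, &attr);
}
```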
Communication between the harness and projects [SOFT-252]
Idea:
Define protocol buffers for each piece of hardware, e.g. stm32, ads1015, spv1020, etc. (start with just the stm32 and one driver)
Generate the protobufs using the protobuf-c library for C headers and the regular protoc compiler for Python/JavaScript bindings
Use the C headers directly for communication between the harness and drivers. Note that it would then be useful to add a make target that only compiles the protos and puts the headers in the right spots.
Each piece of hardware gets its own ‘store’ within the master data store. Each project shares the memory for that store directly with the harness. These stores should be allocated and mapped by the harness in advance; upon init, each driver should grab the key to that store (see the attach sketch at the end of this section).
As in SOFT-251, projects are run from the harness via popen().
For WebSockets, stores can be encoded as protobufs and sent periodically (the tick rate can be determined later). CAN messages should be forwarded directly. The harness should accept a query that triggers a send of the entire data store at once (see the encoding sketch at the end of this section).
The Python library can do something similar to candump: listen for incoming messages and log them. It should also be able to run a non-blocking socket listener that keeps a local store updated with the most recent state.
The Flask + web client can use the library to handle state etc. and just pass that state along to the web client, either as JSON or by forwarding the harness's socket messages directly to the client to save decoding/encoding steps.
TL;DR: generate headers from the protobufs using nanopb. Mmap stores from the harness and have the projects access those stores. The harness taps into the CAN network. There shouldn’t be a need for messages between the harness and the projects.
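On the driver side, “grabbing the key” could amount to opening the pre-allocated store by name; this is a sketch assuming the same POSIX shared memory scheme as above, with hypothetical names:

```c
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>

#define STM32_STORE_NAME "/mpxe_stm32_store"  // hypothetical per-hardware store name
#define STM32_STORE_SIZE 4096                 // must match what the harness allocated

// Driver side: attach to a store the harness already created. No copying
// involved; writes here are immediately visible to the harness.
static void *prv_attach_store(void) {
  int fd = shm_open(STM32_STORE_NAME, O_RDWR, 0);
  if (fd < 0) {
    perror("shm_open");
    exit(EXIT_FAILURE);
  }
  void *store = mmap(NULL, STM32_STORE_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
  if (store == MAP_FAILED) {
    perror("mmap");
    exit(EXIT_FAILURE);
  }
  close(fd);
  return store;
}
```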
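For the periodic WebSocket sends, encoding a store with nanopb could look like the following; Stm32Store and its generated Stm32Store_fields descriptor are hypothetical names standing in for whatever the generated code actually produces:

```c
#include <stdint.h>
#include <pb_encode.h>  // nanopb runtime
#include "stm32.pb.h"   // hypothetical nanopb-generated header for the stm32 store

// Encode the current state of an stm32 store into a buffer, ready to be sent
// over WebSocket on the next tick. Returns the number of encoded bytes.
static size_t prv_encode_store(const Stm32Store *store, uint8_t *buf, size_t buf_len) {
  pb_ostream_t stream = pb_ostream_from_buffer(buf, buf_len);
  if (!pb_encode(&stream, Stm32Store_fields, store)) {
    return 0;  // encoding failed
  }
  return stream.bytes_written;  // bytes to hand to the WebSocket module
}
```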
MCP2515 design problem - different uses of the driver in different projects
The scripts emulate the hardware pieces (e.g. the Elcon charger or WaveSculptor motor controller)
Scripts attach callbacks to shared memory
e.g. can_tx() and can_rx()
scripts should decode these messages and respond accordingly
MCP2515 calls those callbacks and registers an rx() callback to be called by the script
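A sketch of that callback scheme, with hypothetical names and signatures. Note this treats the callback table as ordinary in-process state; raw function pointers are only meaningful within one address space, so across a real process boundary the same idea would have to be dispatched through the message queue instead:

```c
#include <stddef.h>
#include <stdint.h>

// Hypothetical callback signature; the real driver's message type may differ.
typedef void (*CanMsgCb)(uint32_t id, const uint8_t *data, size_t len);

// Callback table shared between the emulation script and the driver: the
// script fills in can_tx/can_rx, and the driver registers rx for the script
// to call when the emulated hardware sends a message back.
typedef struct {
  CanMsgCb can_tx;  // driver -> script: driver transmitted a message
  CanMsgCb can_rx;  // driver -> script: as needed by the emulated hardware
  CanMsgCb rx;      // script -> driver: emulated hardware's response
} Mcp2515Cbs;

// Driver side: transmit by invoking the script's callback...
static void mcp2515_tx(Mcp2515Cbs *cbs, uint32_t id, const uint8_t *data, size_t len) {
  if (cbs->can_tx != NULL) {
    cbs->can_tx(id, data, len);
  }
}

// ...and register our rx() handler for the script to call.
static void mcp2515_register_rx(Mcp2515Cbs *cbs, CanMsgCb rx) {
  cbs->rx = rx;
}
```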
Block Diagram
Of course this is a WIP
Components:
CAN handler
receives messages to send from the message handler in main, stores them in a TX queue, and sends them ASAP
reads messages off the CAN bus and sends them to the message handler in main
Project Manager
takes commands to power on and off boards
sends logs from projects to the log parser in main
each project runs in its own process
project manager holds a handle to each running project and gathers the logs accordingly
Project manager should also handle running drivers for the motor controller and charger, etc. Basically, the logic for any hardware interactions we have should be run in the project manager
Data store
stores the shared data for the projects
handles mapping/unmapping of shared memory depending on what project is spun up (should it expose an API for the project manager to call whenever a new project spins up?)
sends updates to the update handler in main
Websockets
receives commands over WebSocket and passes them to the message handler in main to be processed accordingly
sends messages passed to it by the message handler over WebSocket to expose to clients (e.g. CAN messages, data store updates, project logs)
Questions:
do we need a queue for RXing CAN messages?
should modules call main, or should main grab messages from queues stored in the modules?
Notes from July 1st, Max and Jess
worked out structure of ring buffer (to use for queues)
Yes, we need a queue for RXing CAN messages. CAN and Websocket will both expose TX and RX queues. The module will act as the consumer for the TX queue and the producer for the RX queue. Main will act as the consumer for the RX queues and the producer for the TX queues.
Main will be implemented as an infinite loop that constantly checks the RX queues. Each queue will expose a read-only count of how many items it holds, which main can poll. This doesn’t have to happen atomically, since main is the only consumer: the count can grow after main reads it, but it will never decrement, so it is always safe to consume that many items.
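A minimal single-producer/single-consumer sketch of the queue and main-loop structure described above; the element type, sizes, and queue names are placeholders, and a real implementation would want C11 atomics for head/tail rather than volatile:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define QUEUE_SIZE 64  // hypothetical capacity; a power of two so % wraps cleanly

// Single-producer/single-consumer ring buffer. One side is owned by a module
// and the other by main, so no locking is required. A real implementation
// should use C11 atomics for head/tail; volatile alone is not a memory barrier.
typedef struct {
  uint8_t items[QUEUE_SIZE][8];  // element type is illustrative (e.g. raw CAN frames)
  volatile size_t head;          // free-running, written only by the producer
  volatile size_t tail;          // free-running, written only by the consumer
} Queue;

// The read-only count main polls: it can grow after being read (the producer
// keeps producing) but never shrinks, since main is the only consumer, so it
// is always safe to pop this many items.
static size_t queue_count(const Queue *q) {
  return q->head - q->tail;
}

static bool queue_push(Queue *q, const uint8_t item[8]) {
  if (queue_count(q) == QUEUE_SIZE) {
    return false;  // full; the producer decides whether to drop or retry
  }
  memcpy(q->items[q->head % QUEUE_SIZE], item, 8);
  q->head++;  // publish only after the item has been written
  return true;
}

static bool queue_pop(Queue *q, uint8_t item[8]) {
  if (queue_count(q) == 0) {
    return false;  // empty
  }
  memcpy(item, q->items[q->tail % QUEUE_SIZE], 8);
  q->tail++;
  return true;
}

static Queue s_can_rx_queue;  // produced by the CAN handler, consumed by main
static Queue s_ws_tx_queue;   // produced by main, consumed by the WebSocket module

int main(void) {
  while (true) {
    // Snapshot the count once, then consume exactly that many items.
    size_t n = queue_count(&s_can_rx_queue);
    for (size_t i = 0; i < n; i++) {
      uint8_t msg[8];
      queue_pop(&s_can_rx_queue, msg);
      // ...parse the message, then forward it, e.g. to the WebSocket module:
      queue_push(&s_ws_tx_queue, msg);
    }
    // ...check the WebSocket RX queue and any other RX queues the same way...
  }
}
```

Snapshotting the count before the inner loop bounds the work done per iteration, so a fast producer can’t starve the other queues.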