Published 2019-12-12T03:22:00.002Z by Physics Derivation Graph
Currently I have a Docker container that runs Flask on Ubuntu to present a web interface that uses forms to enter information; see sandbox/docker_images/flask_ubuntu/web.
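As a rough sketch of that interface (illustrative only, not the actual code in sandbox/docker_images/flask_ubuntu/web; the route and form field names are made up), the Flask side is just a form whose POST content gets handed to the backend:

    from flask import Flask, request, render_template_string

    app = Flask(__name__)

    # minimal form; the real interface has more fields
    FORM = ('<form method="post">'
            '<input type="text" name="latex_expression">'
            '<input type="submit" value="submit"></form>')

    @app.route('/', methods=['GET', 'POST'])
    def index():
        if request.method == 'POST':
            latex_str = request.form['latex_expression']
            # hand the string to the LaTeX-to-PNG step described next
            return 'received: ' + latex_str
        return render_template_string(FORM)

    if __name__ == '__main__':
        app.run(host='0.0.0.0')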
A Python script on the backend handles conversion of LaTeX strings to PNG using dvipng, while graphviz generates the static graph PNG.
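A sketch of those two conversion steps, assuming the backend shells out to the command-line tools (the file names and helper functions here are hypothetical, not taken from the repo):

    import subprocess

    def latex_to_png(latex_str, png_name):
        # wrap the expression in a minimal document, compile to DVI, then
        # convert the DVI to a tightly cropped PNG with dvipng
        with open('expr.tex', 'w') as tex_file:
            tex_file.write('\\documentclass{article}\n\\pagestyle{empty}\n'
                           '\\begin{document}\n$' + latex_str + '$\n\\end{document}\n')
        subprocess.run(['latex', '-halt-on-error', 'expr.tex'], check=True)
        subprocess.run(['dvipng', 'expr.dvi', '-T', 'tight', '-o', png_name], check=True)

    def dot_to_png(dot_name, png_name):
        # graphviz renders the static graph PNG from a .dot file
        subprocess.run(['dot', '-Tpng', dot_name, '-o', png_name], check=True)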
The other major component of the backend is an sqlite3 database that holds the data when the container is offline. I don't have experience with SQL, so I need a plan to get to the minimum viable product.
The purpose of the sqlite3 file is to store the multiple tables offline. I could use a Python Pickle file, but that would be specific to Python; the sqlite approach seems more portable and generic.
The only actions I need are:
write data structure from memory (in Python) to sqlite
read data structure from sqlite into Python
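A minimal round-trip sketch of those two actions using the standard-library sqlite3 module (the table and column names here are placeholders, not the final schema):

    import sqlite3

    def write_expressions(db_file, expressions):
        # expressions is a list of (expression_latex, expression_id) tuples
        conn = sqlite3.connect(db_file)
        conn.execute('CREATE TABLE IF NOT EXISTS expressions '
                     '(expression_latex TEXT, expression_id INTEGER)')
        conn.execute('DELETE FROM expressions')  # rewrite the whole table
        conn.executemany('INSERT INTO expressions VALUES (?, ?)', expressions)
        conn.commit()
        conn.close()

    def read_expressions(db_file):
        conn = sqlite3.connect(db_file)
        rows = conn.execute('SELECT expression_latex, expression_id '
                            'FROM expressions').fetchall()
        conn.close()
        return rows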
Summary:
SQL tables <--> Python data structures <--> graph structure <--> graph viz, website generation, UI web
On startup, read data into Python from sqlite.
After that, every time there is a change to the structure in Python, write to sqlite.
This approach is not elegant compared to "write only diff" or "write at end of session," but it eliminates any possibility of inconsistency.
This approach doesn't scale for large databases or multiple users, but those aren't problems I need to solve right now (I'm intentionally incurring technical debt).
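In code, the policy is simply "mutate the in-memory structure, then immediately dump everything back to the file." A sketch (assuming the in-memory expressions live in a dict mapping latex to ID, and reusing write_expressions from the sketch above; save() is a hypothetical helper that would repeat the same pattern for the other tables):

    def save(dat, db_file='pdg.sqlite3'):
        # full dump of every in-memory table, no diffing
        write_expressions(db_file, list(dat['expressions'].items()))
        # ... repeat for the other tables

    def add_expression(dat, latex_str, expr_id, db_file='pdg.sqlite3'):
        dat['expressions'][latex_str] = expr_id   # update the in-memory structure
        save(dat, db_file)                        # then immediately rewrite the file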
Another reason to prefer SQL over a Pickle file is the ability to enforce constraints on the data:
enforce column consistency (each row has N columns)
enforce column types (e.g., string, integer)
enforce entry length (e.g., local ID must be an integer with M digits)
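SQLite can express all three constraints in the schema itself: the column list fixes the column count, CHECK(typeof(...)) guards the types (SQLite's type affinity is permissive by default), and a CHECK on the value range enforces the entry length. An illustrative sketch, assuming M = 7 digits for the local ID:

    import sqlite3

    conn = sqlite3.connect('pdg_sandbox.sqlite3')
    conn.execute("""
        CREATE TABLE IF NOT EXISTS derivation_expressions (
            derivation_name TEXT    NOT NULL CHECK (typeof(derivation_name) = 'text'),
            expression_id   INTEGER NOT NULL CHECK (typeof(expression_id) = 'integer'),
            local_id        INTEGER NOT NULL
                CHECK (local_id BETWEEN 1000000 AND 9999999)  -- exactly 7 digits
        )""")
    conn.commit()
    conn.close()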
SQLite options
From the perspective of file management, having one file feels cleaner than a file per derivation.
6 tables in 1 SQLite file
One option is to implement 6 table schemas:
expression latex to expression ID. Columns:
expression_latex (string)
expression ID (integer)
inference rule to latex, description, CAS representation. Columns:
inference rule (string)
inference rule latex (string)
inference rule description (string)
CAS representation (string)
derivation edges. Columns:
derivation name (string)
from local ID (integer)
to local ID (integer)
derivation feeds. Columns:
derivation name (string)
latex (string)
local ID (integer)
derivation expressions. Columns:
derivation name (string)
expression ID (integer)
local ID (integer)
derivation inference rules. Columns:
derivation name (string)
inference rule (string)
local ID (integer)
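Transcribing that list into DDL (a sketch; the snake_case identifiers are illustrative normalizations, not names taken from the repo):

    import sqlite3

    schema = """
    CREATE TABLE expressions (
        expression_latex TEXT,
        expression_id    INTEGER
    );
    CREATE TABLE inference_rules (
        inference_rule             TEXT,
        inference_rule_latex       TEXT,
        inference_rule_description TEXT,
        cas_representation         TEXT
    );
    CREATE TABLE derivation_edges (
        derivation_name TEXT,
        from_local_id   INTEGER,
        to_local_id     INTEGER
    );
    CREATE TABLE derivation_feeds (
        derivation_name TEXT,
        feed_latex      TEXT,
        local_id        INTEGER
    );
    CREATE TABLE derivation_expressions (
        derivation_name TEXT,
        expression_id   INTEGER,
        local_id        INTEGER
    );
    CREATE TABLE derivation_inference_rules (
        derivation_name TEXT,
        inference_rule  TEXT,
        local_id        INTEGER
    );
    """

    conn = sqlite3.connect('pdg_tables.sqlite3')
    conn.executescript(schema)
    conn.close()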
I suspect this layout of tables is suboptimal -- having the "derivation name" repeated in a column is an indicator that the table count should be 2+4*D (where "D" is the number of derivations) to eliminate the duplication, rather than 6. This 2+4*D design is also apparent in the "dict of derivations" structure described below. My motive for using 6 tables is that with 2+4*D the table names are not static.
2+4*D tables in 1 SQLite file
Two tables are independent of derivations:
expression latex to expression ID. Columns:
expression_latex (string)
expression ID (integer)
inference rule to latex, description, CAS representation. Columns:
inference rule (string)
inference rule latex (string)
inference rule description (string)
CAS representation (string)
And 4 tables are needed per derivation. The problem with this is that the names of the tables aren't known in advance.
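A sketch of what the per-derivation tables would look like, with the derivation name spliced into the table names (table names can't be SQL parameters, so the name has to be sanitized and embedded in the DDL string, which is part of what makes this option awkward):

    import re
    import sqlite3

    def create_derivation_tables(conn, derivation_name):
        # table names are built from the derivation name, so they are not static
        safe = re.sub(r'\W+', '_', derivation_name)
        conn.executescript("""
            CREATE TABLE IF NOT EXISTS edges_{0}           (from_local_id INTEGER, to_local_id INTEGER);
            CREATE TABLE IF NOT EXISTS feeds_{0}           (feed_latex TEXT, local_id INTEGER);
            CREATE TABLE IF NOT EXISTS expressions_{0}     (expression_id INTEGER, local_id INTEGER);
            CREATE TABLE IF NOT EXISTS inference_rules_{0} (inference_rule TEXT, local_id INTEGER);
        """.format(safe))

    conn = sqlite3.connect('pdg_derivations.sqlite3')
    create_derivation_tables(conn, 'derivation name 1')
    conn.close()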
2 tables in 1 SQLite file; 4 tables in D SQLite files
Same as the previous option, except that instead of a single SQLite file, each derivation is stored in a separate file.
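The only change relative to the previous sketch is which file gets opened; something like the following (the file-naming convention is made up):

    import re
    import sqlite3

    def connect_to_derivation(derivation_name):
        # one SQLite file per derivation, named after the derivation
        safe = re.sub(r'\W+', '_', derivation_name)
        return sqlite3.connect('derivation_' + safe + '.sqlite3')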
SQLite to Python
These tables in SQL are equivalently stored in Python as three data structures:
list of inference rules = [{'inf rule': 'inf rule 1', 'in': 1, 'out': 0}, {'inf rule': 'inf rule 2', 'in': 2, 'out': 3}]
list of expressions = [{'expr 1': 59285924, 'expr 2': 954849, 'expr 3': 948299}]
dict of derivations = {'derivation name 1': [<step1>, <step2>, <step3>]}
where each <step> has the structure
{'inf rule': 'this inf rule',
 'input': [{'expr local ID': 942, 'expr ID': 59285924}],
 'output': [{'expr local ID': 218, 'expr ID': 954849}]}
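With that structure, a derivation can be walked step by step; for example (hypothetical data matching the shapes above):

    derivations = {
        'derivation name 1': [
            {'inf rule': 'this inf rule',
             'input':  [{'expr local ID': 942, 'expr ID': 59285924}],
             'output': [{'expr local ID': 218, 'expr ID': 954849}]}]}

    for derivation_name, steps in derivations.items():
        for step in steps:
            for expr in step['input'] + step['output']:
                print(derivation_name, step['inf rule'],
                      expr['expr local ID'], expr['expr ID'])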