From e5d685b84c17acab7e5678c0309efdedf5cb0425 Mon Sep 17 00:00:00 2001
From: PhatPhuckDave
Date: Fri, 26 Sep 2025 09:36:26 +0200
Subject: [PATCH] Update spec with client side processing

---
 Spec.md | 37 +++++++++++++++++++++++++++++++++++++
 1 file changed, 37 insertions(+)

diff --git a/Spec.md b/Spec.md
index 7f33606..48b3361 100644
--- a/Spec.md
+++ b/Spec.md
@@ -26,6 +26,9 @@ type Event struct {
 Events are divided into 3 types: create, update, and delete events
 Create events simply create the object as given in Data
 Delete events simply mark an object as deleted (not actually delete!) via its ItemID
+This simply means we set the DeletedAt field of the object to the current timestamp
+Updates to deleted objects are processed as usual; we have no concept of a "deleted" item, only an item with a field set to a value
+Which is then filtered against when fetching, and that field happens to be named "DeletedAt"
 Update events are to modify a field of a row and never more than one field
 Therefore its data is only the diff, in the form of "age = 3"
@@ -39,11 +42,45 @@
 Assign the event a sequence number that is incremented from the latest
 Create its EventID (generate a uuid-v4)
 Assign it a Timestamp
 Compute the hash from the dump of the current event PLUS the previous event's hash
+When serializing the event, write the serialization function manually to ensure field order
+Do not use JSON serialization or %+v; manually string the fields together (see the sketch below)
 And only then apply the patch
 For create events that is insert objects
 For delete events that is mark objects as deleted
 For update events get the object, apply the diff and save the object
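+
+A minimal sketch of the manual serialization and hash chain; SHA-256, hex encoding, and the "|" separator are assumptions (the spec has not fixed them), and field names not shown above (Seq, Type, Hash) are placeholders
+
+```go
+package main
+
+import (
+	"crypto/sha256"
+	"encoding/hex"
+	"fmt"
+	"strconv"
+)
+
+// Mirrors the Event struct above; Seq, Type, and Hash are assumed names.
+type Event struct {
+	Seq       int64  // sequence number, incremented from the latest
+	EventID   string // uuid-v4
+	ItemID    string
+	Type      string // "create" | "update" | "delete"
+	Timestamp int64
+	Data      string // full object for create, single-field diff for update
+	Hash      string
+}
+
+// serialize strings the fields together by hand in a fixed order --
+// no json.Marshal and no %+v, so the bytes never depend on struct tags,
+// map ordering, or formatting internals.
+func serialize(e Event) string {
+	return strconv.FormatInt(e.Seq, 10) + "|" +
+		e.EventID + "|" +
+		e.ItemID + "|" +
+		e.Type + "|" +
+		strconv.FormatInt(e.Timestamp, 10) + "|" +
+		e.Data
+}
+
+// chainHash hashes the dump of the current event PLUS the previous
+// event's hash, linking the log into a tamper-evident chain.
+func chainHash(e Event, prevHash string) string {
+	sum := sha256.Sum256([]byte(serialize(e) + prevHash))
+	return hex.EncodeToString(sum[:])
+}
+
+func main() {
+	prev := "" // the first event in the log has no predecessor
+	e := Event{Seq: 1, EventID: "uuid-v4-here", ItemID: "item-1",
+		Type: "update", Timestamp: 1758871200, Data: "age = 3"}
+	e.Hash = chainHash(e, prev)
+	fmt.Println(e.Hash)
+}
+```
+
+Stringing the fields together by hand means the hash input is byte-identical on every client, independent of any JSON library's key ordering or formatting quirks
+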
+Events are to be periodically merged on the server
+Maybe set this merge cutoff to 2 or 3 days
+This means resolve all the events, delete the event log, and generate an event log having only create events with the data we resolved
+Hopefully we will never have more than a few hundred events
+Do NOT reset the seq number at any point, always increment from the last
+
+Maybe instead of deleting the event log, save it somewhere just to have a backup
+Maybe cram the events into a text file and save it with a timestamp
+Maybe don't delete/merge the whole event log but only "old" events (>2d)
+While keeping the "new" events (<2d)
+
+
+On the client side we have to be able to apply patches and fetch objects
+The client is to keep the sequence number and hash of the last event it has processed
+When starting up, ask the server for any new events since its last sequence number
+Get any new events and apply them to the local state
+When modifying objects, generate events and append them to our local event log
+Periodically, or when possible, try to send those events to the server
+This means we have to keep the event log saved locally
+When the event log is merged on the server, our local log will diverge
+We will only know this by comparing the client hash and seq with the server hash and seq
+For example the client may have seq 127 and hash "abcd123" while the server, after merging, has seq 127 and hash "efgh456"
+Since on the server the event at seq 127 will have no previous events (they were merged and deleted)
+While on the client it will still have previous events
+At that point the server is to send the whole event log again and the client is to reconstruct its state from it
+
+IF the server merged the event log and our client has events that have not yet been sent
+Then get the new events from the server, apply them, and apply our local events on top of those
+And try to send them to the server again
+On the server side, if a client sends us events after we merged the event log
+We may simply apply them as usual, even if the client was not operating on the merged event log
+At the end of the day, merging the event log should make no changes to the data
---
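
A minimal sketch of the divergence check described in the patch above (SyncState and the resync flow are illustrative names; the spec does not define an API):

```go
package main

import "fmt"

// SyncState is the (seq, hash) pair each side keeps for the last
// event in its log.
type SyncState struct {
	Seq  int64
	Hash string
}

// divergedAfterMerge reports the case from the spec: the same seq on
// both sides but different hashes, e.g. client 127/"abcd123" vs. server
// 127/"efgh456" after the server merged its log. The chained hashes can
// no longer agree, so the client must rebuild from the server's log.
func divergedAfterMerge(client, server SyncState) bool {
	return client.Seq == server.Seq && client.Hash != server.Hash
}

func main() {
	client := SyncState{Seq: 127, Hash: "abcd123"}
	server := SyncState{Seq: 127, Hash: "efgh456"}
	if divergedAfterMerge(client, server) {
		// Pull the whole event log, rebuild local state, then replay
		// any unsent local events on top and try to send them again.
		fmt.Println("resync: fetch full log, rebase local events")
	}
}
```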