There are a lot of cases where the authenticity of a call to a trust script matters (see the use cases below), and there is currently no way to verify it. This proposal puts forward a way to authenticate script output to enable new use cases.
1) Change every single trust script (and only trust scripts) to accept an optional "signed:true" argument.
2) Generate a public/private key pair for the server from some secure-enough system (I'd use PGP personally), and expose the public key via a new trust script (`Ftrust`.`Lpublic_key`?)
3) Any trust script that sees signed:true operates as follows:
a) It does its normal routine as usual and generates a return value `Cret` (the normal return it would make if signed:true were not present).
b) It creates the following object (the date/return fields are the same ones step 4 expects):
`N"date"`:`C<the current server time>`,
`N"context"`:`C<context as seen by that trust script>`,
`N"args"`:`C<args as seen by that trust script>`,
`N"return"`:`Cret`,
(Note: signing/including args lets especially paranoid callers supply their own unique nonce argument, which the trust script silently ignores, as extra protection against replay attacks. I.e. I can add `N"dtrs_awesome_nonce"`:`V"asdf234asdf2ndadsn3a"` to my script calls and verify that it is still in the args I get back; no trust script uses that key, so it's fine.)
c) It `CJSON.stringify`-es that object, then signs the resulting string with the private key.
d) It adds the signature to the object it constructed (with key `N"signature"`).
e) It returns that object.
4) Introduce a new trust script (`Ftrust`.`Lverify`?) that, given an object with date/context/args/return/signature keys, verifies that the signature matches the other fields and returns true or false. (Or, with the exposed public key, we can do our own offline validation out of game.)
Use cases:

1) Verifying `Fscripts`.`Lquine` output. Right now, quine output cannot be believed: it could just be a string the script returns, or the script could edit the result before returning it. Signed `Fscripts`.`Lquine` output is unspoofable (since scripts can't be edited, and `Ccontext` includes `Cthis_script`, which would be `Fscripts`.`Lquine`; the date verifies it is not ancient signed output being reused). Beyond letting users check that a script isn't lying, this would enable things like golf contests ("solve this problem in the shortest script"): the test script takes a scriptor, requests a signed quine to verify the character count, and records the output. Right now there is no way to run such a contest that cannot be cheated.
2) Allowing scriptors to be trusted in 'hostile' (subscripted) environments. Because the date, context, and args are included, banks can verify the result of `Faccts`.`Lxfer_gc_to` scriptors, item records can verify the result of `Fsys`.`Lupgrades` scriptors, and so on, even when subscripted. (Currently, scriptors cannot be trusted at all when subscripted.)
3) Proving old records are accurate. I can save a signed copy of, for example, my upgrade logs today, and in 6 months, when those upgrade logs have *long since* been deleted from the server (30-day retention), I can still prove that the call was legitimate. This makes long-term record keeping provable beyond the current 30-day window, which is much too short for many use cases (long-term loans, etc.).
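For use cases 1 and 2, a paranoid caller also wants to know the signed object answers *this* call, not a replayed old one. A caller-side freshness check might look like the sketch below; `isFresh` and its field names are hypothetical (the nonce key is just the example one from the proposal), and this check only means anything after the signature itself has been verified via `Ftrust`.`Lverify` or the public key.

```javascript
// Sketch of a caller-side replay check. Assumes `signed` is a
// signature-verified object of the proposed shape, and `myNonce` is
// the unique value this caller added to args (silently ignored by
// the trust script, but covered by the signature).
function isFresh(signed, myNonce, maxAgeMs) {
  // The echoed nonce proves this object answers my call, not a replay.
  if (signed.args.dtrs_awesome_nonce !== myNonce) return false;
  // The signed date bounds how old the output can be.
  const age = Date.now() - Date.parse(signed.date);
  return age >= 0 && age <= maxAgeMs;
}
```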
In a game about trust, a strong, secure way to prove things is maybe not ideal, and we've gotten on just fine without it so far. But we're often forced into awkward choices to ensure data is accurate, and that makes scripts less useful. A way to provide clean interfaces to users while still ensuring accuracy would open up a lot of use cases. And just having verifiable quine output would make scripting competitions viable without relying on people not lying.
If the secret key were ever leaked, all previously signed data would lose its value. This destroys the long-term record keeping use case (3, above), but only harms cases 1 and 2 for the window between when the leak happens and when it is corrected (which would ideally be brief, but in practice maybe not so much).
PGP can be slow, which is why this is optional. Signing is used only when a script knows it needs extra guarantees about the validity of the data it is getting and is okay paying in runtime to get those guarantees. Normal users aren't affected, except by the parse load of the encryption code (which would always need to be included).