The Unclear Impact

Kristóf Marussy

I'm a PhD student working on the extra-functional requirements and formal verification of cyber-physical system architectures.
I also like free (as in liberty) software, privacy enhancing technologies, and cryptography.

I may not be trans but transgender hating script kiddies are too incompetent to tell the difference. Donkey Kong says trans rights = human rights.

welp, the power outage seems to have cooked a switch at uni.

time to whip out the DB-9 cable to see whether it outputs any debug info, or whether it's gone for good

orange site
craiyon output for the prompt "serkle nufding": black-and-white photos of a bearded man, resembling 19th-century portraits

social media

discords for high-profile open-source projects look dystopian

@colin I just tried compiling with because eclipse was crashing too much

it's pretty trivial on arch, since the build scripts are available in the repo sources, and I just had to add a patch -p1 call

but actually, the arch build servers had already built a new version of webkit (maintainers were extra quick, kudos!) with the patch by the time my machine got about halfway through the build, so I ended up just installing from the repos anyways blobfoxlaughsweat

time to build a patched webkit2gtk


first comments from my advisor are back for the first complete draft of my thesis blobfoxnomdonutterrified

why does eclipse like to coredump on me in libwebkit2gtk of all places?



Downloading 338.45 MB separate debug info for /usr/lib/

holy moly!

@TinfoilSubmarine @realcaseyrollins how was the pleroma -> akkoma switch? i'm looking at git log pleroma/develop..akkoma/develop, but can't find a straightforward point of divergence. did you manage to upgrade without deleting the database?

friendship ended with persistent hashmaps, now undo/redo logs are my best friend

or possibly not, we still have to run some benchmarks blobfoxscience

@monsoonrains @fuchsiashock note to self: make sure to serve conference pears in order to maximize confusion if i ever have to organize a conference blobfoxsmug


it's baffling that even when right-wingers have a moment of awareness and realize that (among a plethora of other things) ubiquitous surveillance and proprietary tethers in technology make us like slaves, they're immediately like "thus, we should exterminate minorities"

it's inhumane, completely disconnected from logic, and prevents any action that would serve as a real remedy

fuck fascism.

Re: javascript

@aral my first intuition would be that this always displays 1, because let count = 1 is at the top of the file, and it looks like it always gets executed when the page is rendered blobfoxglare

I'd expect a clearer separation between the initialization and the rendering, like

let count = 1;

export default function render() {
  return <div>{count++}</div>;
}

or maybe even (goodness forbid!)

let [count] = useState(1);

return <div>{count++}</div>;

(btw, won't the code display "1 times" even on the first request, because the condition count > 1 gets evaluated after the post-increment? granted, I'm not familiar with the order of side-effects in jsx interpolation expressions)

@kristof 鈥nd then there鈥檚 mine, where you can, but the board manufacturer has its own keys, and if you replace them you can no longer configure the motherboard 馃檭

a general purpose computer where you can't replace microsoft's root of trust keys with yours isn't general purpose

@tindall that's not a type of guy, that's a Guy(Of type) blobfoxcomputerowo

@juliank ah, thanks! very interesting

I never realized apt was allowed to reshuffle already (automatically) installed packages, but in that case, stability sounds useful indeed blobfoxcomfy

@juliank btw (I am not very familiar with apt, so pardon me if this is nonsense), how large/difficult are these dependency resolution problems that they need a custom decision procedure? naively, directly encoding everything into SAT seems possible

(OTOH p2 in Eclipse does just that, and is usually dreadfully slow)
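for illustration, here's a toy direct-to-SAT encoding with a naive backtracking solver (all package names and clauses are invented, and this is nothing like apt's actual resolver):

```python
def solve(clauses, n_vars, assignment=()):
    """Naive backtracking SAT solver: a clause is a tuple of ints,
    where k means variable k is true and -k means variable k is false."""
    if len(assignment) == n_vars:
        return assignment
    for value in (True, False):
        trial = assignment + (value,)
        ok = True
        for clause in clauses:
            # a clause fails only if every literal is assigned and false
            undecided = False
            satisfied = False
            for lit in clause:
                idx = abs(lit) - 1
                if idx >= len(trial):
                    undecided = True
                elif trial[idx] == (lit > 0):
                    satisfied = True
            if not satisfied and not undecided:
                ok = False
                break
        if ok:
            result = solve(clauses, n_vars, trial)
            if result is not None:
                return result
    return None  # unsatisfiable: no consistent installation exists

# hypothetical variables: 1 = app, 2 = libfoo, 3 = libbar, 4 = libfoo-v2
clauses = [
    (1,),        # we want app installed
    (-1, 2, 3),  # app depends on libfoo or libbar
    (-2, -4),    # libfoo conflicts with libfoo-v2
]
model = solve(clauses, n_vars=4)
print(model)  # → (True, True, True, False)
```

(the naive solver happily installs libbar too, since nothing forbids it — a real resolver would also minimize the installed set, which is where the problem gets hard)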

@juliank the parallel exploration on different parts of the graph sounds a bit weird to me at first. in the end, you'd need to integrate the results from multiple threads, where the nodes could have shifted around in different ways.

a simpler approach could be to explore different parts of the search space in parallel: say, pick an initial branching point, and explore the possible outcomes (the selected node in the or-group in your case) in separate threads. however, this may need more memory to keep track of each candidate solution in each thread (but persistent data structures with structural sharing can reduce the overhead considerably)
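a minimal sketch of that simpler approach (the "solver" and all names here are made up for illustration): each thread continues the search from a different or-group alternative, and the first branch that succeeds wins

```python
from concurrent.futures import ThreadPoolExecutor

def search(choice, constraints):
    """Hypothetical sequential solver continuing from a fixed first choice;
    here a choice is viable if it satisfies every constraint predicate."""
    return choice if all(c(choice) for c in constraints) else None

def parallel_or_group(alternatives, constraints):
    # each thread owns its own candidate, so there is no shared mutable state
    with ThreadPoolExecutor(max_workers=len(alternatives)) as pool:
        for result in pool.map(lambda a: search(a, constraints), alternatives):
            if result is not None:
                return result
    return None

# only "libbar" is consistent with the (invented) constraint
print(parallel_or_group(["libfoo", "libbar"], [lambda c: c != "libfoo"]))
```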

@juliank we're trying to generate large(-ish) graphs which satisfy some first-order constraints (useful as test inputs or design space exploration), so we have the benefit that our graph starts small and grows big as the solver runs; it seems you're having to deal with the opposite blobfoxupsidedown

but I'll try to keep this use-case in mind; at the very least, now I know that I could make nice graph-processing benchmarks from apt install :)

clause learning seems quite hard in this space, because the nodes of the graph 'shift' around as you are merging them (we're doing the opposite by splitting), so there is no easy way to generalize the results of conflict analysis into a lemma…

data structures, long

@juliank at uni, we're developing something similar (in java, however), for SAT solving actually

in a previous version of the system, we were basically using journaling. that makes backtracking relatively easy, but jumping across different branches of the derivation tree (e.g., some heuristic tells you to abandon whatever you were doing and try something else, but if that doesn't yield a solution quickly, go back to wherever you were in the search space) becomes inefficient, because you always have to first go back to the common ancestor, and then drill down again
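the journaling idea can be sketched like this (names invented; our actual system is in java): every write pushes an undo entry onto a trail, so backtracking just pops entries, but jumping to a sibling branch still means unwinding to the common ancestor and replaying the other branch

```python
class JournaledMap:
    """Minimal journaling sketch: a map whose writes are undoable."""
    _MISSING = object()  # sentinel for "key was absent before the write"

    def __init__(self):
        self.data = {}
        self.trail = []  # (key, previous value) pairs, newest last

    def put(self, key, value):
        self.trail.append((key, self.data.get(key, self._MISSING)))
        self.data[key] = value

    def mark(self):
        return len(self.trail)

    def backtrack(self, mark):
        # undo writes until the trail shrinks back to the mark
        while len(self.trail) > mark:
            key, old = self.trail.pop()
            if old is self._MISSING:
                del self.data[key]
            else:
                self.data[key] = old

m = JournaledMap()
m.put("x", 1)
root = m.mark()
m.put("x", 2)
m.put("y", 3)
m.backtrack(root)  # cheap: just pops two trail entries
print(m.data)      # → {'x': 1}
```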

currently, we're experimenting with persistent data structures. HAMTs seem to work nicely as a big collection of labeled tuples (~graph edges). we're following this paper: but implemented it from scratch with our own optimizations

one particular optimization that seems useful is to have immutable and mutable tree nodes, and "check in" a new revision of a collection by copying each mutable node into a packed immutable one, which can handle batch updates more efficiently (immutable -> mutable when performing an update is just CoW)
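a toy sketch of the mutable/immutable split (simplified: it freezes nodes in place instead of copying them into a packed representation, and a single flat node stands in for a whole tree):

```python
class Node:
    """A node holding a few entries; frozen nodes are shared between
    revisions, mutable ones belong only to the revision being built."""
    def __init__(self, entries=None, frozen=False):
        self.entries = dict(entries or {})
        self.frozen = frozen

def update(node, key, value):
    # CoW on write: copy a frozen node into a mutable one, then mutate freely;
    # an already-mutable node absorbs batch updates in place
    if node.frozen:
        node = Node(node.entries, frozen=False)
    node.entries[key] = value
    return node

def check_in(node):
    # freeze the mutable node; later updates will copy it again
    node.frozen = True
    return node

v1 = check_in(update(Node(), "a", 1))
v2 = check_in(update(v1, "a", 2))
print(v1.entries, v2.entries)    # → {'a': 1} {'a': 2}
print(update(v2, "b", 3) is v2)  # → False: frozen nodes are copied on write
```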

of course, a persistent set of tuples is barely a graph, so we also have to maintain indexes for adjacency. the cool thing is that it's very fast to determine the delta between two persistent data structures (just iterate over the two trees, skipping over immutable nodes that are referentially identical): when backtracking, you can calculate a delta (usually small), and update (mutable) indexes accordingly. we're using to incrementally maintain indexes and graph queries, but would be another alternative
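the delta trick can be shown with a tiny persistent BST instead of a HAMT (sketch only): path copying leaves untouched subtrees referentially identical across versions, so the diff can skip them in O(1)

```python
class T:
    """Tiny persistent binary search tree node."""
    def __init__(self, key, value, left=None, right=None):
        self.key, self.value, self.left, self.right = key, value, left, right

def insert(t, key, value):
    # path copying: only nodes on the path to `key` become new objects
    if t is None:
        return T(key, value)
    if key < t.key:
        return T(t.key, t.value, insert(t.left, key, value), t.right)
    if key > t.key:
        return T(t.key, t.value, t.left, insert(t.right, key, value))
    return T(key, value, t.left, t.right)

def entries(t):
    return {} if t is None else {t.key: t.value, **entries(t.left), **entries(t.right)}

def delta(old, new):
    """Changed entries between two versions as {key: (old, new)}."""
    if old is new:  # referentially identical: nothing below changed
        return {}
    if old is not None and new is not None and old.key == new.key:
        d = {}
        if old.value != new.value:
            d[old.key] = (old.value, new.value)
        d.update(delta(old.left, new.left))
        d.update(delta(old.right, new.right))
        return d
    # shapes diverged at this position: fall back to comparing full contents
    old_e, new_e = entries(old), entries(new)
    return {k: (old_e.get(k), new_e.get(k))
            for k in old_e.keys() | new_e.keys()
            if old_e.get(k) != new_e.get(k)}

v1 = insert(insert(insert(None, 2, "b"), 1, "a"), 3, "c")
v2 = insert(v1, 3, "C")   # only the path 2 -> 3 is copied
print(delta(v1, v2))      # → {3: ('c', 'C')}
print(v1.left is v2.left) # → True: the untouched subtree is shared
```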

another idea would be to use another persistent data structure, like a ROMDD, radix tree, or even B-tree to store the indexes. but that might lead to pathological behavior on some inputs (that a randomly selected hash function in HAMT can avoid)

w.r.t. doing this in native code, I think you could get away with just releasing all mutable nodes when "checking in" and never releasing immutable ones (since "checking in" can control the number of stored versions, hopefully, there won't be extremely many). otherwise, I'd try adding a refcount to mutable nodes, but I have no idea about its efficiency