The Unclear Impact

Kristóf Marussy | @kristof@pleroma.marussy.com

I'm a PhD student working on the extra-functional requirements and formal verification of cyber-physical system architectures.
I also like free (as in liberty) software, privacy enhancing technologies, and cryptography.

re: nvidia

@arielcostas The problem is that other hardware companies aren’t that much better. The minimum should be disclosing schematics and firmware source along with the hardware for interfacing and repairability purposes (like for the microcomputers of yore). But somewhere down the line we’ve lost the possibility for people to fully understand the equipment that they use and ended up with treacherous computing instead.

But indeed: Nvidia, fuck you

re: nvidia

@arielcostas Not buying nvidia again would be the solution. They use DRM on the GPU firmware to prevent the open-source driver from ever accessing the registers needed for GPU power management and proper performance.

Ostensibly, this is for stopping card counterfeiting, but I’d imagine it’s also awfully convenient for preventing the user from circumventing HDCP easily. Until recently, the proprietary driver also prevented you from using the GPU in a virtual machine. Plus, they introduced a firmware-based lockout on some GPUs to prevent cryptocurrency mining, which would likewise be easy to circumvent with an open driver – though people have found a way to do that even with the closed driver, so the huge secrecy is all for nothing anyways.

(Not that AMD is much better, since they too rely on firmware blobs, but at least the driver that loads them plays nicely with the Linux infrastructure for now.)

@meeper if you mean remapping as in sending a different event on a key press, then probably yes. I did it like this: https://git.marussy.com/keyboard_remap/tree/keyboard_remap.py

There’s also https://gitlab.com/interception/linux/tools , but I’ve never figured out how to use that properly.
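For reference, here’s a minimal sketch of the grab-and-rewrite idea my script uses, based on the python-evdev library (the device path and the CapsLock→Esc mapping are just placeholders; this needs permission to read /dev/input and write /dev/uinput):

```python
#!/usr/bin/env python3
# Minimal key remapping sketch using python-evdev: grab the real keyboard
# and re-emit its events through a virtual uinput device, rewriting some.
from evdev import InputDevice, UInput, ecodes

REMAP = {ecodes.KEY_CAPSLOCK: ecodes.KEY_ESC}  # placeholder mapping

dev = InputDevice('/dev/input/event3')  # find yours with `python -m evdev.evtest`
dev.grab()  # exclusive access, so the unmapped events aren't delivered twice

with UInput.from_device(dev, name='remapped-keyboard') as ui:
    for event in dev.read_loop():
        if event.type == ecodes.EV_KEY and event.code in REMAP:
            ui.write(ecodes.EV_KEY, REMAP[event.code], event.value)
        else:
            ui.write_event(event)
        ui.syn()
```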

@aral @jens If the computer is powered on, and your threat model includes physical attacks that tamper with the memory (e.g., a DMA attack via a Thunderbolt port), even erasing your key won’t save you from key exfiltration. The attacker could just modify the code in your memory that unlocks the disk to send your password somewhere as soon as you enter it.

Also, clearing the key could be annoying if done on locking the computer, because background tasks won’t be able to run (or will at least have to cache everything they need to run in plaintext in memory).

So, basically, clearing the key is only useful when you suspend your machine and expect to notice any tampering with your computer before you enter your password upon resume. That is still pretty nice to have as a defense-in-depth mechanism, but not extremely crucial.

I’d rate having Secure Boot enabled with keys you control (and not Microsoft’s) and TPM-based integrity checking for your bootloader (again, with your own keys, but still entering a password) a lot more important, to avoid tampering with the password entry code. Instead of suspending the machine, you can just hibernate it, and it’ll ask for a password upon resuming. The protection offered while the machine is on is still nonzero: assuming you have a lock screen enabled with a strong login password, the only way to access your disk is to tamper with the memory via DMA or other physical access.

That being said, if systemd-homed had been available when I set up my machine, I’d probably have used it in conjunction with encrypting the whole system at boot, to have some protection when the machine is suspended but not hibernated. But I haven’t bothered setting it up yet, since I use suspend too rarely (due to battery drain) for it to make a difference.
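(For the curious, here’s a hypothetical sketch of what suspend-time key clearing can look like even without systemd-homed, using cryptsetup’s luksSuspend, which evicts the volume key from kernel memory. The mapping name is a placeholder, and the hard part is not shown: after waking up, luksResume has to be run from code that isn’t itself blocked on the frozen device.)

```python
#!/usr/bin/env python3
# Hypothetical hook for /usr/lib/systemd/system-sleep/: systemd calls such
# hooks with two arguments, 'pre'/'post' and the sleep operation. Before a
# plain suspend, evict the LUKS volume key from kernel memory; all IO to
# the device then blocks until someone runs `cryptsetup luksResume`.
import subprocess
import sys

DEVICE = 'crypthome'  # dm-crypt mapping name, placeholder

if len(sys.argv) >= 3 and sys.argv[1] == 'pre' and sys.argv[2] == 'suspend':
    subprocess.run(['cryptsetup', 'luksSuspend', DEVICE], check=True)
```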

re: apple, privacy, csam mention

@witchy Thank you! Your comments really made me think about my underlying assumptions.

Is it really a line that’s just now being crossed,

I think it is, in the sense that surveillance is being moved from common infrastructure to a device ostensibly owned (and trusted) by the user. A secret warrant still means the NSA and the like have to attack from the outside, while the proposed scanning plants a police deputy with basically no possibility of oversight right into a very personal piece of equipment, by default (not even a secret warrant is required).

Of course, the ability to defend against an outside attack by secret warrant is still a source of inequity. But this is why we need projects that support people (with priority for whistleblowers, journalists, and others who keep systems accountable) in terms of opsec and surveillance self-defense.

To be perfectly clear, I have zero qualms about scanning files uploaded to a cloud for CSAM on the cloud provider’s infrastructure, nor about deliberately attempting to plant malware (after a court order) on the phones of suspected abusers. Let all that we have be thrown at them.

I recognize that this is a naive viewpoint, since it mostly relies on good people having good opsec while bad people have bad opsec. But it at least leaves open the possibility of some good people remaining.

And is your issue really with the ability to extend this to less worthy causes and more authoritarian (re: csam to tankman) or is it with the principle of “forced self-incrimination”?

I think the reason that “forced self-incrimination” is forbidden in democratic countries is that it is part of how we protect ourselves against the rise of authoritarianism.

While forcing someone to self-incriminate would be very useful indeed in cases where we can universally agree that what they’ve done is heinous, how could we trust any single organization or government to have that power?

I think I’d be mostly okay with a system where multiple independent organizations (from different countries) would publicly certify that the hashes only contain CSAM. Then the possibility of extending this system for authoritarian purposes would be significantly reduced (but I’m still unsure whether that sets some kind of dangerous precedent anyways). It would only force abusers to self-incriminate, while remaining as impartial in other cases as possible.

re: apple, privacy, csam mention

@edsu Yeah, that’s also an interesting question. [first tech rep., p. 5] has a bit of info about the self-supervised training process, which sounds like a quite clever idea (though likely not a novel one – I’m not really a NN person, but see, e.g., [Weng, 2019] for a survey of similar techniques).

Deliberately engineering the CNN to generate the same hash for a CSAM image and a benign image would be possible, I guess. But there’s no need to do so if the hashes are obscured from the user. Plus, as far as I understand, someone could extract the CNN from iOS and run some tests on it.

academia

Dissertation writing sucks all the motivation out of me. blobfoxcomfyterrified

Luckily, I have a bunch of papers to review, so I can occupy myself with something supposedly useful for others.

academia

Guess I became the reviewer who actually executes all the evaluations to check the results in the paper. It took three days of running benchmarks to notice a single typo in a table. Was it useful? Probably not. blobcatshrug

cleaning electronics, alc mention, tutorial?, long

My MX Master 3 mouse’s rubber shell was peeling off, which caused a funny feeling when handling it & little pieces of rubber going everywhere. Not wanting to just throw away the mouse, I decided to clean it up by removing some of the offending rubber. This is not a coating, so it cannot be removed entirely: the whole shell is one piece of rubber, which separated into layers due to body heat and sweat. I hope this is just a design mistake, and that Logitech didn’t deliberately select such a material for a part that is being touched for the better part of every workday.

  1. You don’t need to remove the shell of the mouse, because it is reasonably watertight. But avoid getting any liquids inside through the button holes. (The shell can be removed with a Torx screwdriver and some insistence.)
  2. I couldn’t find any pure alcohol at home, so I used a cleaning solution that is mostly denatured alcohol (ethanol with isopropyl, I guess) and some glycerin. I don’t know how much the glycerin helped or hurt.
  3. To stop the outer layer from peeling, soak a q-tip in the alcohol and rub it on the outside of the peeling bit to thin the top layer out. Once it’s thin enough, it will stick to the bottom layer instead of peeling up. Using the q-tip on the inside border of the damaged area will cause the top layer to peel even more, which can be useful for removing ugly parts of it.
  4. For especially stubborn overhangs, a dry q-tip can help abrade them away, but it will damage the underlying rubber.
  5. Soak a cotton ball in alcohol and rub it vigorously over the (now thinned) edge to ‘fuse’ it to the underlying layer. It looks like this is part mechanical (the rubbing thins the upper layer even more) and part chemical (the alcohol acts as a solvent for the rubber, so the layers can stick together). This step is a bit hard on the fingertips, so a glove (one that is resistant to the alcohol) could be useful.
  6. Clean the site with a cotton ball soaked in water to wash away the alcohol residue, then dry everything with a cloth.
  7. Finally, I cleaned the whole mouse with a damp microfibre cloth, just like I usually do.

Sorry, no before / during pics. I was improvising throughout, so there’s probably a better way to do this with less damage to the shell.

I guess the exposed lower layer will eventually start peeling again, but the process could be repeated a few times until the shell becomes too thin. This should make the mouse more pleasant to use throughout its lifetime.

Logitech MX Master 3 mouse (black) with scratches but no peeling rubber on the back. The area where the back contacts the palm was abraded away.

apple, privacy, csam mention, long

Everyone should read the CSAM Detection technical report by Apple, which explains some details of the technique they are going to deploy. The cryptographic details are available in a second technical report.

  1. The protocol is technically sound. Indeed, no private information will leak to Apple from the iPhone unless the protocol detects that more images match the hash database pdata than some preset threshold.
  2. pdata, which contains NeuralHash values for contraband images (basically quantized activation values of a neural network trained to distinguish between similar and different images – see the toy sketch after this list), is sent in an encrypted form to the device so that users can’t learn what images are considered contraband. This is partly a good thing, because otherwise one could compute adversarial preimages of the NeuralHash. Such images, while legal, could trigger an alert. Nevertheless, “the protocol need not prevent a malicious client from causing the server to obtain an incorrect ftPSI-AD output” and “a malicious client that attempts to cause an overcount of the intersection will be detected by mechanisms outside of the cryptographic protocol” [p. 6 in the second technical report], so it’s still possible to trigger a false alarm by running malicious code on the phone. I imagine the “mechanisms outside of the cryptographic protocol” would be human review.
  3. However, the encrypted pdata means that the users can’t know what kind of images are considered contraband. “The Apple PSI system addresses this issue differently, by using measures implemented outside of the cryptographic protocol.” [ibid., p. 13] It could be CSAM, it could be Tank Man, it could be Collateral Murder, and it might as well be country-specific to appease specific regimes. In fact, the protocol is designed so that the user can’t learn which images triggered a notification of authorities.
  4. The only way to make this palatable would be to get many independent organizations (including governments and NGOs) to certify that pdata indeed contains no other hashes than CSAM: “one could mitigate tampering with the set X by relying on a third party, who knows both pdata and X, to certify that pdata is constructed correctly for X”. Apple explicitly rejects this approach and “addresses this issue differently” [ibid., p. 13]. Even then, the concerns about false positives, malware, and the impossibility of transparency would remain.
  5. Apple already has the capability to scan photos uploaded to iCloud, since Apple holds the key to decrypt iCloud backups. So, while they claim only the photos uploaded to iCloud are scanned on-device, their approach makes very little sense unless they want to eventually extend it to scanning all files.
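To make the NeuralHash part of point 2 concrete, here’s a toy model of the hashing idea: embed the image, then quantize the embedding into a bit string. The random projection below merely stands in for Apple’s CNN; nothing here is their actual algorithm:

```python
import numpy as np

# Toy stand-in for the NeuralHash idea: a random projection plays the role
# of the CNN embedding, and the sign of each activation becomes a hash bit.
rng = np.random.default_rng(0)
PROJECTION = rng.standard_normal((96, 64 * 64))  # 96-bit hash of a 64x64 image

def perceptual_hash(image: np.ndarray) -> np.ndarray:
    activations = PROJECTION @ image.ravel()  # "neural" embedding (toy)
    return (activations > 0).astype(np.uint8)  # quantize to bits

def matches(hash_a, hash_b, max_hamming=0):
    # The real system compares hashes server-side inside the PSI protocol;
    # here we just count differing bits.
    return np.count_nonzero(hash_a != hash_b) <= max_hamming

image = rng.random(64 * 64)
slightly_edited = image + rng.normal(scale=1e-3, size=image.shape)
# Small perturbations flip at most a few of the 96 bits:
print(matches(perceptual_hash(image), perceptual_hash(slightly_edited), max_hamming=4))
```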

However, none of these technical details matter much. If we accept the position of @aral that technology makes us cyborgs and digital tools are extensions of our mind, leveraging the users’ devices to prevent their criminal activity is akin to giving everybody a Clockwork Orange-style brainwashing to deter them from breaking the law. In less bombastic terms, it fundamentally violates established principles against forced self-incrimination. This is a line that we shouldn’t cross, no matter how impeccably we implement the crossing.

If you have a github account, you can join Ed Snowden, myself, and what Apple today called the "screeching voices of the minority" of objectors in co-signing the first letter uniting security & privacy experts, researchers, professors, policy advocates, and consumers against Apple's planned moves against all of our privacy: https://appleprivacyletter.com/

reminder to support the awesome musicians you listen to

It’s Bandcamp Friday! ablobfoxbongohyper

Apple: we don't build back doors. Because privacy.

Also Apple: we're building a back door to conduct mass surveillance. It's for the children.

@seachaint @tindall Looks like GANs can do adversarial preimage attacks on perceptual hashes, so evidence for false positives would be relatively easy to exhibit. If there’s no review, that means a near limitless supply of completely legal images that nevertheless prompt SWATting.

That’s extremely scary. But, thinking wishfully, I’d expect reasonable legislators to crack down once faced with such evidence sent to their iPhone.
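To see why such preimages are plausible, here’s a toy attack against the same stand-in hash as in my earlier sketch (a random projection with sign quantization, not NeuralHash itself): since the embedding is differentiable, simple gradient steps can nudge a benign image until its hash equals an arbitrary target.

```python
import numpy as np

# Toy adversarial preimage: push a benign image's activations past a margin
# on the correct side of zero for every bit of a chosen target hash.
rng = np.random.default_rng(1)
PROJECTION = rng.standard_normal((96, 64 * 64))  # toy embedding, not a CNN

def perceptual_hash(image):
    return (PROJECTION @ image > 0).astype(np.uint8)

benign = rng.random(64 * 64)
target_hash = rng.integers(0, 2, 96)  # pretend this is a database hash
signs = 2.0 * target_hash - 1.0       # +1 where a bit should be 1, -1 where 0

image = benign.copy()
for _ in range(5000):
    activations = PROJECTION @ image
    violated = signs * activations < 1.0  # bits not yet on the right side
    if not violated.any():
        break
    # hinge-loss style gradient step on the violated constraints only
    image += 1e-4 * (PROJECTION[violated].T @ signs[violated])

print(np.array_equal(perceptual_hash(image), target_hash))  # True: collision
print(np.linalg.norm(image - benign))  # how far we drifted from the benign image
```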

re: nodejs, electron

@jookia best part is, it’s probably some regex DoS vuln in some development tools that are never exposed to user input in prod, but npm will still insist on a high severity error – alert fatigue makes the heart grow fonder, I guess
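(For the curious: the classic shape of these vulns is a regex with nested quantifiers, where an almost-matching input triggers catastrophic backtracking. A quick self-contained demo:)

```python
import re
import time

# Nested quantifiers + a non-matching suffix = catastrophic backtracking:
# the engine tries exponentially many ways to split the run of 'a's.
EVIL = re.compile(r'^(a+)+$')

for n in (16, 20, 24):
    payload = 'a' * n + 'b'  # almost matches, which is the worst case
    start = time.perf_counter()
    EVIL.match(payload)
    print(n, f'{time.perf_counter() - start:.3f}s')
# Every 4 extra characters multiply the runtime by roughly 16, even though
# no sane input would ever look like this in the dev tool being flagged.
```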

looks like whatever kind of rubber the shell of the MX Master 3 is made of, it can be almost welded back together by slightly dissolving it in alcohol. neat, that should help counteract the peeling

is nebula (the streaming service this time, not the overlay network) actually a co-op? youtubers advertise it as “my streaming service” as if they were worker-owners, but other sources say that 50% of the profits go to standard.tv, who 1. don’t seem to be worker-owners at all 2. look a bit shady.

@fedops @robby sshfs is yet another different thing (and another awesome use-case for ssh!)

I’m more interested in having some authenticated tunnel for a bunch of streams (TCP connections, named pipes, etc.), which is mostly served nicely by either ssh port forwarding or just piping some command’s stdin/stdout over ssh, but could probably still be improved (e.g., by making it tolerant against network failure à la mosh)

@robby Yeah, SSH’s pretty cool, but it sometimes feels like it would be better to separate the authentication/tunneling/communication-channel functionality from the terminal functionality (it’s awkward that you can forward ports but not file descriptors/pipes, so any piping for btrfs send and the like has to be done over standard io or maybe netcat). I guess mosh does the terminal thing pretty well (piggybacking on ssh for authentication and a pipe to receive connection info), but there’s no tool that just wants to be an authenticated pipe thing. Overlay networks like nebula are similar, but they generally need root access to set up a network, and they also only work with ports. (Although that’s probably just me moaning about my ridiculously specific use-case for a user-mode overlay network.)
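(The standard-io workaround I mean looks something like this sketch; the hostname, paths, and the required privileges on both ends are all assumptions:)

```python
import subprocess

# Stream a btrfs snapshot to another machine over ssh's stdin, since ssh
# gives us exactly one authenticated pipe per command (placeholder names).
send = subprocess.Popen(
    ['btrfs', 'send', '/snapshots/home@today'],
    stdout=subprocess.PIPE,
)
subprocess.run(
    ['ssh', 'backup-host', 'btrfs receive /backups'],
    stdin=send.stdout,
    check=True,
)
send.stdout.close()
if send.wait() != 0:
    raise RuntimeError('btrfs send failed')
```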

academia, reviewing

just wrote a 12k-character review about the reproducibility of an artifact. granted, 7k of that is the diff -u between the output claimed by the authors and the actual output on my system, but it might still be a bit excessive (on the other hand, I hope that correcting the discrepancies will improve the associated paper)
