
I mean, is it possible to safely plug a PCMCIA card into a PC without an IOMMU? Such computers are very widespread: every digital TV or receiver has a CI+ slot, which is PCMCIA, and people insert cards there that they don't control and haven't audited. I wonder if the people doing this are giving ultimate control over their hardware to the entity responsible for the card firmware. Is it possible to create a universal rooter for TVs and receivers based on a custom PCMCIA card? Or can it be that all the TVs are backdoored by backdoored CI modules?

KOLANICH

1 Answer


The IDE interface itself does not allow DMA, but PCMCIA does.

DMA requires a feature of the PCI bus called the bus master enable bit, which is set by privileged software once it recognizes a DMA-capable device. A hardware driver can enable bus mastering for any PCI device with DMA capabilities. This is what happens with both the internal IDE hub (in the ICH) and the PCMCIA card. The IDE protocol itself does not support DMA. The main difference is that the ICH is internal and trusted, whereas PCMCIA is external and untrusted. Because of this, connecting a new PCMCIA card is far riskier than hooking up an untrusted IDE device.
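
Concretely, that toggle is bit 2 of the PCI Command register at offset 0x04 in configuration space. As a minimal sketch (Linux, reading config space through sysfs; the device address below is a placeholder, pick a real one from `lspci`), this is how you could check whether a device has been granted bus mastering:

```c
/*
 * Minimal sketch (Linux): read a PCI device's Command register and
 * report whether the Bus Master Enable bit (bit 2) is set.
 * The device address is a placeholder; substitute a real BDF address.
 */
#include <stdio.h>
#include <stdint.h>
#include <fcntl.h>
#include <unistd.h>

int main(void) {
    /* Hypothetical device address; pick one from `lspci`. */
    const char *path = "/sys/bus/pci/devices/0000:00:1f.1/config";
    int fd = open(path, O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    uint16_t cmd;
    /* The Command register lives at offset 0x04 of PCI config space. */
    if (pread(fd, &cmd, sizeof(cmd), 0x04) != sizeof(cmd)) {
        perror("pread");
        close(fd);
        return 1;
    }
    close(fd);

    printf("Command register: 0x%04x\n", cmd);
    printf("Bus Master Enable: %s\n", (cmd & 0x4) ? "set" : "clear");
    return 0;
}
```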

is it possible to prevent them purely with software means without any IOMMU or other special hardware

It is technically possible to prevent this in software, assuming no ugly bugs in implementations of the PCI specifications. However, it would require modifying the driver software so that it does not grant bus-master capability to that interface when a device is plugged in, which would most likely prevent the card from functioning entirely. At the very least, it would limit it to the obscenely slow PIO mode for IDE rather than the much faster UDMA mode, which, as the name implies, requires DMA. PIO is slower because it involves the CPU in every data transfer, which is very inefficient. If PIO is not supported and DMA is disabled... well, then the PCMCIA card will not function at all.
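
As a minimal sketch of the software side of that revocation (Linux, needs root; the device address is again a placeholder, and a real fix would hook the driver's hotplug path rather than run after the fact), clearing the same bit looks like this:

```c
/*
 * Minimal sketch (Linux, needs root): clear the Bus Master Enable bit
 * on a PCI device, revoking its ability to initiate DMA. The device
 * address is a placeholder for illustration only.
 */
#include <stdio.h>
#include <stdint.h>
#include <fcntl.h>
#include <unistd.h>

int main(void) {
    const char *path = "/sys/bus/pci/devices/0000:02:00.0/config";
    int fd = open(path, O_RDWR);
    if (fd < 0) { perror("open"); return 1; }

    uint16_t cmd;
    if (pread(fd, &cmd, sizeof(cmd), 0x04) != sizeof(cmd)) { perror("pread"); return 1; }

    cmd &= ~0x4;  /* bit 2: Bus Master Enable */
    if (pwrite(fd, &cmd, sizeof(cmd), 0x04) != sizeof(cmd)) { perror("pwrite"); return 1; }

    close(fd);
    puts("Bus mastering disabled.");
    return 0;
}
```

The `setpci` utility exposes this register by name (`COMMAND`), so the same toggle can also be flipped from a shell, though the kernel or the driver may simply re-enable it.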

Is it possible to create a universal rooter for TVs and receivers based on a custom PCMCIA card?

This depends on whether or not the TV or receiver is running an architecture that supports DMA and does not have an IOMMU or any other protections against rogue direct memory access. A TV or receiver is an embedded system and might not behave like the familiar IBM-style x86 PC.

Or can it be that all the TVs are backdoored by backdoored CI modules?

This is not something anyone can know without auditing all the TVs. I would hazard a guess and say no, simply because it would be so easy to detect (and so hard for a single malicious card to support a reliable attack against a large number of presumably heterogeneous TV models).


DMA and PIO are the two ways to do I/O. PIO is significantly slower than DMA.

It might be useful to understand why DMA exists and how it works. When DMA is supported, a dedicated region of memory is set up, and the controller (whether the ICH or PCMCIA) directly writes the data read over IDE to that memory region (using the DMA-based UDMA or MDMA protocols). From the CPU's perspective, it issues a command to read data and can then go do its own thing, knowing it will be asynchronously alerted with an interrupt as soon as the transfer has finished, with the requested data waiting right there in memory for it. Likewise, if it wants to write over IDE, it can simply populate that memory region with the desired data and tell the controller to take it. Just as when reading data, the CPU is free to go off and do its own thing without needing to micromanage the transfer; it will be alerted with an interrupt as soon as the task is completed. This is why transferring data to and from devices (GPUs, NICs, storage devices using IDE or SATA, etc.) can be made so fast while requiring so little CPU time.
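
As a schematic sketch of that flow, here is what the CPU side of a DMA read looks like in driver-style C. All register names and offsets are invented for illustration (real controllers, including IDE UDMA engines, differ in detail), and the register mapping is assumed to have been done elsewhere:

```c
/*
 * Schematic sketch of the DMA flow described above, against a
 * hypothetical device. Register names/offsets are invented; this is
 * driver-context code, not a standalone program.
 */
#include <stdint.h>

#define REG_DMA_ADDR   0x00  /* physical address of the buffer */
#define REG_DMA_LEN    0x08  /* transfer length in bytes */
#define REG_DMA_CTRL   0x10  /* bit 0: start */

static volatile uint8_t *mmio;  /* device registers, assumed already mapped */

static void write64(uint32_t off, uint64_t v) {
    *(volatile uint64_t *)(mmio + off) = v;
}

/* CPU side: program the transfer, then go do other work. */
void start_dma_read(uint64_t buf_phys, uint64_t len) {
    write64(REG_DMA_ADDR, buf_phys);   /* where the device should write */
    write64(REG_DMA_LEN, len);
    write64(REG_DMA_CTRL, 0x1);        /* kick off the transfer */
    /* No polling loop here: completion arrives as an interrupt. */
}

/* Interrupt handler: by the time this runs, the data is already in memory. */
void dma_irq_handler(void) {
    /* acknowledge the device, then hand the buffer to whoever asked for it */
}
```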

Taking advantage of DMA to perform such optimizations and get out of the CPU's hair requires that DMA actually be supported. After all, how can a device avoid bothering the CPU with every transfer if it is not given raw access to the same memory the CPU has? Normally, the device is simply trusted to write only to that memory region and nowhere else. An IOMMU, if present, can force it to use its dedicated memory region and nothing else (a process called DMA remapping, using DMAR tables provided by the BIOS that specify the memory regions each device may use). If DMA is not supported, then transfers can only happen over the much slower PIO mode.
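
As a small, concrete check (Linux-specific; the path is the standard sysfs export of ACPI tables), whether the firmware advertises DMA remapping at all can be seen from the presence of the DMAR table:

```c
/*
 * Minimal sketch (Linux): check whether the firmware exposes a DMAR
 * ACPI table, i.e. whether Intel VT-d DMA remapping is advertised.
 * (AMD platforms use an IVRS table instead.)
 */
#include <stdio.h>
#include <unistd.h>

int main(void) {
    if (access("/sys/firmware/acpi/tables/DMAR", F_OK) == 0)
        puts("DMAR table present: DMA remapping is available.");
    else
        puts("No DMAR table: no Intel IOMMU advertised by firmware.");
    return 0;
}
```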

There are two forms of PIO (Programmed I/O): MMIO (Memory-Mapped I/O) and PMIO (Port-Mapped I/O). They are similar in that both require active interaction from the CPU. With MMIO, a special address range is set up that looks like memory from the CPU's perspective, but the bits have different functions when set, meaning it is not regular memory. With PMIO, data is loaded into a register and an instruction (such as OUT) is issued, which writes the data to a certain "port" number that specifies its destination. Some older technologies still use this, such as the PS/2 keyboard. For large transfers of data, however, it is very inefficient: it requires the CPU to load a small register only a few tens of bits in size, issue an instruction, and wait for that instruction to finish before it can do any more work, over and over again. It is woefully inefficient, to the point where the maximum speed is only a few MiB/s while maxing out the CPU in the process. It makes the CPU the bottleneck.
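
As a minimal sketch of PMIO (Linux on x86, needs root), reading a single byte from the PS/2 controller's status port shows the per-instruction, CPU-driven nature of port I/O:

```c
/*
 * Minimal sketch (Linux x86, needs root): port-mapped I/O using the
 * IN/OUT instructions, reading the PS/2 controller status port (0x64).
 * Every byte crosses through a CPU register, which is exactly why bulk
 * transfers over PIO are so slow.
 */
#include <stdio.h>
#include <sys/io.h>   /* inb/outb wrappers around the IN/OUT instructions */

int main(void) {
    /* Ask the kernel for access to I/O ports 0x60-0x64. */
    if (ioperm(0x60, 5, 1) != 0) { perror("ioperm"); return 1; }

    unsigned char status = inb(0x64);   /* one byte per instruction */
    printf("PS/2 controller status: 0x%02x\n", status);
    return 0;
}
```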

forest