07-29-2014, 02:30 AM
(07-29-2014, 01:49 AM)tyler569 Wrote: Am I misunderstanding or is there a better way to ensure memory access security without segmentation?
Most RISC processors (such as ARM) use only multi-level paging instead of paging plus segmentation. Segmentation doesn't provide any more protection (in fact it's arguably less protective, since segments are much coarser than pages). The way x86 does it, accessing one piece of data takes 4 loads from main memory, because it needs to:
1: Use the segment selector to index the global/local descriptor table and fetch the segment descriptor.
2: Add the segment base from that descriptor to the offset to form a linear address, with the descriptor's protection bits and segment limit checked.
3: Use the upper bits of that linear address to index the page directory (first page table).
4: Use the entry from that page directory plus the middle bits of the linear address to index a page table (second page table).
5: Use the frame address from that page table entry combined with the lower bits of the linear address to form the final real address in RAM.
On the processor this takes 4 loads from memory (descriptor, page directory entry, page table entry, then the data itself). A rough C sketch of that walk, just to show where the memory accesses land (read64()/read32() are made-up stand-ins for a raw load from physical RAM, and I'm assuming classic 32-bit mode with 4 KiB pages, no PAE/PSE):
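Code:
#include <stdint.h>

/* Sketch of the x86 walk above: segmentation followed by two-level paging.
 * read64()/read32() are hypothetical raw loads from physical RAM. */
extern uint64_t read64(uint32_t phys_addr);
extern uint32_t read32(uint32_t phys_addr);

uint32_t translate_x86(uint16_t selector, uint32_t offset,
                       uint32_t gdt_base, uint32_t cr3)
{
    /* Load 1: the 8-byte segment descriptor from the GDT/LDT. */
    uint64_t desc = read64(gdt_base + (selector & ~0x7u));

    /* No load here: rebuild the segment base from the descriptor and add
     * the offset (limit and privilege checks omitted for brevity). */
    uint32_t base = (uint32_t)((desc >> 16) & 0xFFFFFFu)
                  | (uint32_t)(((desc >> 56) & 0xFFu) << 24);
    uint32_t linear = base + offset;

    /* Load 2: page directory entry, indexed by linear[31:22]. */
    uint32_t pde = read32((cr3 & ~0xFFFu) + ((linear >> 22) & 0x3FFu) * 4);

    /* Load 3: page table entry, indexed by linear[21:12]. */
    uint32_t pte = read32((pde & ~0xFFFu) + ((linear >> 12) & 0x3FFu) * 4);

    /* Frame from the PTE plus the page offset; load 4 is the data access
     * the program actually wanted, at this physical address. */
    return (pte & ~0xFFFu) | (linear & 0xFFFu);
}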
A RISC processor, on the other hand, would do something along these lines (using my CPU as an example; a sketch in the same style follows the list):
1: Use the upper bits of the virtual address to look up a page directory (first page table), which has dynamic size (can be paged to HDD) and has protection bits such as read-only, never-execute, valid, referenced, and I/O.
2: Use the entry from that page directory plus the middle bits of the virtual address to look up a page table (second page table), which has the same protection bits as above.
3: Use the entry from that page table as the upper bits of the real address; the lower bits of the virtual address become the lower bits of the real address.
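Code:
#include <stdint.h>

/* Sketch of the two-level walk just described.  The entry layout (which
 * bit means valid, read-only, never-execute, I/O, ...) is invented purely
 * for illustration; the point is that translation is only two loads. */
extern uint32_t read32(uint32_t phys_addr);

#define ENTRY_VALID 0x1u   /* assumed bit position, not a real format */

uint32_t translate_2level(uint32_t dir_base, uint32_t vaddr)
{
    /* Load 1: page directory entry, indexed by vaddr[31:22].  This is
     * where the per-table protection bits (read-only, never-execute,
     * I/O, ...) would be checked. */
    uint32_t pde = read32(dir_base + ((vaddr >> 22) & 0x3FFu) * 4);
    if (!(pde & ENTRY_VALID))
        return 0;          /* would raise a page fault in real hardware */

    /* Load 2: page table entry, indexed by vaddr[21:12]. */
    uint32_t pte = read32((pde & ~0xFFFu) + ((vaddr >> 12) & 0x3FFu) * 4);
    if (!(pte & ENTRY_VALID))
        return 0;

    /* PTE frame bits become the upper bits of the real address; the low
     * 12 bits of the virtual address pass straight through. */
    return (pte & ~0xFFFu) | (vaddr & 0xFFFu);
}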
This takes only 2 loads from RAM for the translation (3 counting the final data access), yet the page-table structures are about the same size and the protection mechanisms are better than x86's, because I can mark an entire page table as I/O instead of having to write the I/O bit into every individual page table entry, which is slower. Secondly, it's extremely easy to set up shared pages (known as bidirectional queues or "pipes") for inter-process communication, which needs fewer system calls to the kernel and also speeds up the system. As a rough idea of what such a pipe could look like, here's a minimal single-producer/single-consumer ring buffer living in one shared 4 KiB page (the layout and size are made up, a bidirectional pipe would just be two of these, and real code on a multiprocessor would also want memory barriers):
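Code:
#include <stdint.h>

/* Minimal single-producer/single-consumer byte ring living in one shared
 * 4 KiB page.  Layout and size are assumptions for illustration only. */
#define PIPE_BUF_SIZE 4088u              /* page size minus the two indexes */

struct shared_pipe {
    volatile uint32_t head;              /* written only by the producer */
    volatile uint32_t tail;              /* written only by the consumer */
    uint8_t buf[PIPE_BUF_SIZE];
};

/* Producer side: returns 1 if the byte was queued, 0 if the ring is full.
 * No system call is needed while there is room in the buffer. */
static int pipe_put(struct shared_pipe *p, uint8_t byte)
{
    uint32_t next = (p->head + 1) % PIPE_BUF_SIZE;
    if (next == p->tail)
        return 0;                        /* full: caller can yield or block */
    p->buf[p->head] = byte;
    p->head = next;
    return 1;
}

/* Consumer side: returns 1 and stores a byte in *out, or 0 if empty. */
static int pipe_get(struct shared_pipe *p, uint8_t *out)
{
    if (p->tail == p->head)
        return 0;                        /* empty */
    *out = p->buf[p->tail];
    p->tail = (p->tail + 1) % PIPE_BUF_SIZE;
    return 1;
}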