
VxWorks 5.5
BSP Developer's Guide

Before proceeding, determine whether the device is DMA capable. Is the device capable of performing read or write accesses directly to memory that is shared with the CPU? If the answer is no, your driver might not need any of the cache-related facilities of cacheLib. Cache issues affect only those devices that can access memory shared with the CPU.

If the CPU architecture performs buffered writes, you might need to deal with WRITE_PIPING even if the device does not perform DMA. Memory-mapped device registers should not be cached. In most cases, the hardware provides a mechanism that keeps I/O addresses from being cached. However, keep in mind that even a non-DMA device can still have issues related to write pipelining (see Driver Attributes, p.105).
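As a sketch of the write-pipelining point, the fragment below writes a command register and then uses the cacheLib CACHE_PIPE_FLUSH( ) macro to drain any buffered writes before polling a status register. The register addresses, bit masks, and the devStart( ) routine are illustrative assumptions, not part of any real device.

    /* Sketch only: register addresses and bit masks are hypothetical. */

    #include <vxWorks.h>
    #include <cacheLib.h>

    #define DEV_CMD_REG   ((volatile UINT32 *) 0xffff0000)  /* hypothetical */
    #define DEV_STAT_REG  ((volatile UINT32 *) 0xffff0004)  /* hypothetical */
    #define CMD_START     0x1
    #define STAT_DONE     0x1

    void devStart (void)
        {
        *DEV_CMD_REG = CMD_START;     /* write may sit in a CPU write buffer */
        CACHE_PIPE_FLUSH ();          /* drain buffered writes to the bus    */

        while ((*DEV_STAT_REG & STAT_DONE) == 0)
            ;                         /* poll until the device reports done  */
        }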

Shared Memory Types

For DMA-type devices, the driver and the device share one or more memory regions. This shared memory can be one of the following types:

■ Memory that is allocated using the cacheDmaMalloc( ) routine. This memory is associated with the CACHE_DMA_xxxx macros and is under the control of the driver and the underlying hardware for cache issues.

■ Memory that is allocated using the malloc( ) or memalign( ) routine, or declared in the data or bss sections of the module (stack memory must never be shared with a device). This type of memory is associated with the CACHE_DRV_xxxx macros and is solely under the control of the driver for cache issues.

Because you cannot control the positioning of data obtained by these methods, this type of memory has an inherent problem: the possibility of sharing a cache line with an adjacent region that does not belong to the driver. This means that flush and invalidate operations within this region can interfere with the coherency of the neighbor's data in the shared cache lines. Therefore, restrict the use of this type of memory to exclude the first and last cache line in the region. Because the cache line size varies on different systems, this becomes a portability issue.

By using memalign( ), you can ensure that buffers are cache-line aligned at their starting address. If you need to protect the end of the buffer, increase the size of the request by at least one cache line. This ensures that the end of the buffer does not share a cache line with any other buffer. (A sketch covering both memory types follows this list.)
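The following sketch contrasts the two memory types. cacheDmaMalloc( ), cacheDmaFree( ), CACHE_DMA_FLUSH( ), CACHE_DMA_INVALIDATE( ), cacheFlush( ), and cacheInvalidate( ) are standard cacheLib facilities, and memalign( ) is the standard VxWorks allocator; for simplicity the driver-controlled memory uses the cacheFlush( )/cacheInvalidate( ) routines rather than the CACHE_DRV_xxxx macros. The devMemSetup( ) routine, the buffer sizes, and the CACHE_LINE_SIZE constant are hypothetical; a real driver takes the line size from the BSP or architecture headers.

    /*
     * Sketch only: DESC_SIZE, BUF_SIZE, and CACHE_LINE_SIZE are
     * hypothetical values chosen for illustration.
     */

    #include <vxWorks.h>
    #include <cacheLib.h>
    #include <memLib.h>

    #define CACHE_LINE_SIZE  32      /* hypothetical cache line size      */
    #define DESC_SIZE        64      /* hypothetical descriptor area size */
    #define BUF_SIZE         1536    /* hypothetical packet buffer size   */

    STATUS devMemSetup (void)
        {
        char * pDmaMem;              /* cacheDmaMalloc( ) memory          */
        char * pDrvBuf;              /* memalign( ) memory                */

        /* Type 1: memory tied to the CACHE_DMA_xxxx macros */

        pDmaMem = cacheDmaMalloc (DESC_SIZE);
        if (pDmaMem == NULL)
            return (ERROR);

        /* CPU wrote descriptors; flush before the device reads them */
        CACHE_DMA_FLUSH (pDmaMem, DESC_SIZE);

        /* device wrote status; invalidate before the CPU reads it */
        CACHE_DMA_INVALIDATE (pDmaMem, DESC_SIZE);

        /*
         * Type 2: driver-controlled memory.  Align the start on a cache
         * line and pad the request by one line so the buffer's tail never
         * shares a cache line with a neighboring allocation.
         */

        pDrvBuf = memalign (CACHE_LINE_SIZE, BUF_SIZE + CACHE_LINE_SIZE);
        if (pDrvBuf == NULL)
            {
            cacheDmaFree (pDmaMem);
            return (ERROR);
            }

        /* the driver performs flush/invalidate itself for this memory */
        cacheFlush (DATA_CACHE, pDrvBuf, BUF_SIZE);
        cacheInvalidate (DATA_CACHE, pDrvBuf, BUF_SIZE);

        return (OK);
        }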

