#[repr(align(128))] pub struct CacheAlign<T> { /* private fields */ }
Aligns and pads a value to the length of a cache line.
In concurrent programming, it is sometimes desirable to make sure that commonly accessed pieces of data are not placed into the same cache line. Updating an atomic value invalidates the whole cache line it belongs to, which makes the next access to that cache line slower for other CPU cores. Use CacheAlign to ensure that updating one piece of data doesn't invalidate other cached data.
§Size and alignment
Cache lines are assumed to be N bytes long, depending on the architecture:
- On x86-64, aarch64, and powerpc64, N = 128.
- On arm, mips, mips64, sparc, and hexagon, N = 32.
- On m68k, N = 16.
- On s390x, N = 256.
- On all others, N = 64.
Note that N is just a reasonable guess and is not guaranteed to match the actual cache line length of the machine the program runs on. On modern Intel architectures, the spatial prefetcher pulls pairs of 64-byte cache lines at a time, so we pessimistically assume that cache lines are 128 bytes long.
The size of CacheAlign<T> is the smallest multiple of N bytes large enough to accommodate a value of type T.
The alignment of CacheAlign<T> is the maximum of N bytes and the alignment of T.
§Examples
Alignment and padding:
let array = [CacheAlign::new(1i8), CacheAlign::new(2i8)];
let addr1 = &*array[0] as *const i8 as usize;
let addr2 = &*array[1] as *const i8 as usize;
assert!(addr2 - addr1 >= 32);
assert_eq!(addr1 % 32, 0);
assert_eq!(addr2 % 32, 0);
When building a concurrent queue with a head and a tail index, it is wise to place them in different cache lines so that concurrent threads pushing and popping elements don’t invalidate each other’s cache lines:
struct Queue<T> {
head: CacheAlign<AtomicUsize>,
tail: CacheAlign<AtomicUsize>,
buffer: *mut T,
}
Implementations§
impl<T> CacheAlign<T>
pub const fn new(t: T) -> CacheAlign<T>
Pads and aligns a value to the length of a cache line.
§Examples
let padded_value = CacheAlign::new(1);
pub fn into_inner(self) -> T
Returns the inner value.
§Examples
let padded_value = CacheAlign::new(7);
let value = padded_value.into_inner();
assert_eq!(value, 7);
pub const fn into_inner_copy(self) -> T
where
    Self: Copy,
Returns a copy of the inner value; callable in const (compile-time) contexts.
Trait Implementations§
impl<T: Clone> Clone for CacheAlign<T>
fn clone(&self) -> CacheAlign<T>
1.0.0 · fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source. Read more
impl<T: Debug> Debug for CacheAlign<T>
impl<T: Default> Default for CacheAlign<T>
fn default() -> CacheAlign<T>
impl<T> Deref for CacheAlign<T>
impl<T> DerefMut for CacheAlign<T>
impl<T: Display> Display for CacheAlign<T>
impl<T> From<T> for CacheAlign<T>
impl<T: Hash> Hash for CacheAlign<T>
impl<T: PartialEq> PartialEq for CacheAlign<T>
impl<T: Copy> Copy for CacheAlign<T>
impl<T: Eq> Eq for CacheAlign<T>
impl<T: Send> Send for CacheAlign<T>
Available on crate feature unsafe_sync only.
impl<T> StructuralPartialEq for CacheAlign<T>
impl<T: Sync> Sync for CacheAlign<T>
Available on crate feature unsafe_sync only.
Auto Trait Implementations§
impl<T> Freeze for CacheAlign<T> where T: Freeze
impl<T> RefUnwindSafe for CacheAlign<T> where T: RefUnwindSafe
impl<T> Unpin for CacheAlign<T> where T: Unpin
impl<T> UnwindSafe for CacheAlign<T> where T: UnwindSafe
Blanket Implementations§
impl<T> ArchivePointee for T
type ArchivedMetadata = ()
fn pointer_metadata(_: &<T as ArchivePointee>::ArchivedMetadata) -> <T as Pointee>::Metadata
impl<T> BorrowMut<T> for T where T: ?Sized
fn borrow_mut(&mut self) -> &mut T
impl<T> ByteSized for T
const BYTE_ALIGN: usize = _
fn byte_align(&self) -> usize
fn ptr_size_ratio(&self) -> [usize; 2]
impl<T, R> Chain<R> for T where T: ?Sized
impl<T> CloneToUninit for T where T: Clone
impl<Q, K> Equivalent<K> for Q
fn equivalent(&self, key: &K) -> bool
Compare self to key and return true if they are equal.
impl<T> ExtAny for T
fn as_any_mut(&mut self) -> &mut dyn Any where Self: Sized
impl<T> ExtMem for T where T: ?Sized
const NEEDS_DROP: bool = _
fn mem_align_of_val(&self) -> usize
fn mem_size_of_val(&self) -> usize
fn mem_needs_drop(&self) -> bool
Returns true if dropping values of this type matters. Read more
fn mem_forget(self) where Self: Sized
Forgets about self without running its destructor. Read more
fn mem_replace(&mut self, other: Self) -> Self where Self: Sized
unsafe fn mem_zeroed<T>() -> T
Available on crate feature unsafe_layout only. Returns an instance of T represented by the all-zero byte-pattern. Read more
unsafe fn mem_transmute_copy<Src, Dst>(src: &Src) -> Dst
Available on crate feature unsafe_layout only.
fn mem_as_bytes(&self) -> &[u8]
Available on crate feature unsafe_slice only.
impl<S> FromSample<S> for S
fn from_sample_(s: S) -> S
impl<T> Hook for T
impl<T> Instrument for T
fn instrument(self, span: Span) -> Instrumented<Self>
fn in_current_span(self) -> Instrumented<Self>
impl<T> IntoEither for T
fn into_either(self, into_left: bool) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left is true. Converts self into a Right variant of Either<Self, Self> otherwise. Read more
fn into_either_with<F>(self, into_left: F) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left(&self) returns true. Converts self into a Right variant of Either<Self, Self> otherwise. Read more
impl<F, T> IntoSample<T> for F where T: FromSample<F>
fn into_sample(self) -> T
impl<T> LayoutRaw for T
fn layout_raw(_: <T as Pointee>::Metadata) -> Result<Layout, LayoutError>
impl<T, N1, N2> Niching<NichedOption<T, N1>> for N2
unsafe fn is_niched(niched: *const NichedOption<T, N1>) -> bool
fn resolve_niched(out: Place<NichedOption<T, N1>>)
Writes data to out indicating that a T is niched.