Struct AtomicF64

#[repr(C, align(8))]
pub struct AtomicF64 { /* private fields */ }

Available on crate feature dep_portable_atomic only.

A floating point type which can be safely shared between threads.

This type has the same in-memory representation as the underlying floating point type, f64.
Implementations

impl AtomicF64

pub const unsafe fn from_ptr<'a>(ptr: *mut f64) -> &'a AtomicF64

Creates a new reference to an atomic float from a pointer.

This is const fn on Rust 1.83+.
Safety

- ptr must be aligned to align_of::<AtomicF64>() (note that on some platforms this can be bigger than align_of::<f64>()).
- ptr must be valid for both reads and writes for the whole lifetime 'a.
- If this atomic type is lock-free, non-atomic accesses to the value behind ptr must have a happens-before relationship with atomic accesses via the returned value (or vice-versa).
  - In other words, time periods where the value is accessed atomically may not overlap with periods where the value is accessed non-atomically.
  - This requirement is trivially satisfied if ptr is never used non-atomically for the duration of lifetime 'a. Most use cases should be able to follow this guideline.
  - This requirement is also trivially satisfied if all accesses (atomic or not) are done from the same thread.
- If this atomic type is not lock-free:
  - Any accesses to the value behind ptr must have a happens-before relationship with accesses via the returned value (or vice-versa).
  - Any concurrent accesses to the value behind ptr for the duration of lifetime 'a must be compatible with operations performed by this atomic type.
- This method must not be used to create overlapping or mixed-size atomic accesses, as these are not supported by the memory model.
pub fn is_lock_free() -> bool

Returns true if operations on values of this type are lock-free.

If the compiler or the platform doesn't support the necessary atomic instructions, global locks for every potentially concurrent atomic operation will be used.
pub const fn is_always_lock_free() -> bool

Returns true if operations on values of this type are always lock-free.

If the compiler or the platform doesn't support the necessary atomic instructions, global locks for every potentially concurrent atomic operation will be used.

Note: If the atomic operation relies on dynamic CPU feature detection, this type may be lock-free even if the function returns false.
pub const fn get_mut(&mut self) -> &mut f64

Returns a mutable reference to the underlying float.

This is safe because the mutable reference guarantees that no other threads are concurrently accessing the atomic data.

This is const fn on Rust 1.83+.
pub const fn into_inner(self) -> f64

Consumes the atomic and returns the contained value.

This is safe because passing self by value guarantees that no other threads are concurrently accessing the atomic data.

This is const fn on Rust 1.56+.
pub fn swap(&self, val: f64, order: Ordering) -> f64

Stores a value into the atomic float, returning the previous value.

swap takes an Ordering argument which describes the memory ordering of this operation. All ordering modes are possible. Note that using Acquire makes the store part of this operation Relaxed, and using Release makes the load part Relaxed.
pub fn compare_exchange(
    &self,
    current: f64,
    new: f64,
    success: Ordering,
    failure: Ordering,
) -> Result<f64, f64>

Stores a value into the atomic float if its current value is the same as the current argument.

The return value is a result indicating whether the new value was written and containing the previous value. On success this value is guaranteed to be equal to current.

compare_exchange takes two Ordering arguments to describe the memory ordering of this operation. success describes the required ordering for the read-modify-write operation that takes place if the comparison with current succeeds. failure describes the required ordering for the load operation that takes place when the comparison fails. Using Acquire as success ordering makes the store part of this operation Relaxed, and using Release makes the successful load Relaxed. The failure ordering can only be SeqCst, Acquire or Relaxed.

Panics

Panics if failure is Release or AcqRel.
pub fn compare_exchange_weak(
    &self,
    current: f64,
    new: f64,
    success: Ordering,
    failure: Ordering,
) -> Result<f64, f64>

Stores a value into the atomic float if its current value is the same as the current argument.

Unlike compare_exchange this function is allowed to spuriously fail even when the comparison succeeds, which can result in more efficient code on some platforms. The return value is a result indicating whether the new value was written and containing the previous value.

compare_exchange_weak takes two Ordering arguments to describe the memory ordering of this operation. success describes the required ordering for the read-modify-write operation that takes place if the comparison with current succeeds. failure describes the required ordering for the load operation that takes place when the comparison fails. Using Acquire as success ordering makes the store part of this operation Relaxed, and using Release makes the successful load Relaxed. The failure ordering can only be SeqCst, Acquire or Relaxed.

Panics

Panics if failure is Release or AcqRel.
pub fn fetch_add(&self, val: f64, order: Ordering) -> f64

Adds to the current value, returning the previous value.

Following IEEE 754 arithmetic, overflow produces an infinity rather than wrapping around.

fetch_add takes an Ordering argument which describes the memory ordering of this operation. All ordering modes are possible. Note that using Acquire makes the store part of this operation Relaxed, and using Release makes the load part Relaxed.
pub fn fetch_sub(&self, val: f64, order: Ordering) -> f64

Subtracts from the current value, returning the previous value.

Following IEEE 754 arithmetic, overflow produces an infinity rather than wrapping around.

fetch_sub takes an Ordering argument which describes the memory ordering of this operation. All ordering modes are possible. Note that using Acquire makes the store part of this operation Relaxed, and using Release makes the load part Relaxed.
pub fn fetch_update<F>(
    &self,
    set_order: Ordering,
    fetch_order: Ordering,
    f: F,
) -> Result<f64, f64>
where
    F: FnMut(f64) -> Option<f64>,

Fetches the value, and applies a function to it that returns an optional new value. Returns a Result of Ok(previous_value) if the function returned Some(_), else Err(previous_value).

Note: This may call the function multiple times if the value has been changed from other threads in the meantime, as long as the function returns Some(_), but the function will have been applied only once to the stored value.

fetch_update takes two Ordering arguments to describe the memory ordering of this operation. The first describes the required ordering for when the operation finally succeeds while the second describes the required ordering for loads. These correspond to the success and failure orderings of compare_exchange respectively.

Using Acquire as success ordering makes the store part of this operation Relaxed, and using Release makes the final successful load Relaxed. The (failed) load ordering can only be SeqCst, Acquire or Relaxed.

Panics

Panics if fetch_order is Release or AcqRel.

Considerations

This method is not magic; it is not provided by the hardware. It is implemented in terms of compare_exchange_weak, and suffers from the same drawbacks. In particular, this method will not circumvent the ABA Problem.
pub fn fetch_max(&self, val: f64, order: Ordering) -> f64

Maximum with the current value.

Finds the maximum of the current value and the argument val, and sets the new value to the result. Returns the previous value.

fetch_max takes an Ordering argument which describes the memory ordering of this operation. All ordering modes are possible. Note that using Acquire makes the store part of this operation Relaxed, and using Release makes the load part Relaxed.
pub fn fetch_min(&self, val: f64, order: Ordering) -> f64

Minimum with the current value.

Finds the minimum of the current value and the argument val, and sets the new value to the result. Returns the previous value.

fetch_min takes an Ordering argument which describes the memory ordering of this operation. All ordering modes are possible. Note that using Acquire makes the store part of this operation Relaxed, and using Release makes the load part Relaxed.
pub fn fetch_neg(&self, order: Ordering) -> f64

Negates the current value, and sets the new value to the result. Returns the previous value.

fetch_neg takes an Ordering argument which describes the memory ordering of this operation. All ordering modes are possible. Note that using Acquire makes the store part of this operation Relaxed, and using Release makes the load part Relaxed.
pub fn fetch_abs(&self, order: Ordering) -> f64

Computes the absolute value of the current value, and sets the new value to the result. Returns the previous value.

fetch_abs takes an Ordering argument which describes the memory ordering of this operation. All ordering modes are possible. Note that using Acquire makes the store part of this operation Relaxed, and using Release makes the load part Relaxed.
pub const fn as_bits(&self) -> &AtomicU64

Raw transmutation to &AtomicU64.

See f64::from_bits for some discussion of the portability of this operation (there are almost no issues).

This is const fn on Rust 1.58+.
pub const fn as_ptr(&self) -> *mut f64

Returns a mutable pointer to the underlying float.

Returning an *mut pointer from a shared reference to this atomic is safe because the atomic types work with interior mutability. Any use of the returned raw pointer requires an unsafe block and has to uphold the safety requirements. If there is concurrent access, note the following additional safety requirements:

- If this atomic type is lock-free, any concurrent operations on it must be atomic.
- Otherwise, any concurrent operations on it must be compatible with operations performed by this atomic type.

This is const fn on Rust 1.58+.
Trait Implementations

impl BitSized<64> for AtomicF64
    const BIT_SIZE: usize = _
    const MIN_BYTE_SIZE: usize = _

impl ConstDefault for AtomicF64

impl RefUnwindSafe for AtomicF64

Auto Trait Implementations

impl !Freeze for AtomicF64
impl Send for AtomicF64
impl Sync for AtomicF64
impl Unpin for AtomicF64
impl UnwindSafe for AtomicF64
Blanket Implementations

impl<T> ArchivePointee for T
    type ArchivedMetadata = ()
    fn pointer_metadata(_: &<T as ArchivePointee>::ArchivedMetadata) -> <T as Pointee>::Metadata

impl<T> BorrowMut<T> for T
where
    T: ?Sized,
    fn borrow_mut(&mut self) -> &mut T

impl<T> ByteSized for T
    const BYTE_ALIGN: usize = _
    fn byte_align(&self) -> usize
    fn ptr_size_ratio(&self) -> [usize; 2]

impl<T, R> Chain<R> for T
where
    T: ?Sized,

impl<T> ExtAny for T
    fn as_any_mut(&mut self) -> &mut dyn Any
    where
        Self: Sized,

impl<T> ExtMem for T
where
    T: ?Sized,
    const NEEDS_DROP: bool = _
    fn mem_align_of_val(&self) -> usize
    fn mem_size_of_val(&self) -> usize
    fn mem_needs_drop(&self) -> bool
        Returns true if dropping values of this type matters. Read more
    fn mem_forget(self) where Self: Sized
        Forgets self without running its destructor. Read more
    fn mem_replace(&mut self, other: Self) -> Self where Self: Sized
    unsafe fn mem_zeroed<T>() -> T
        Available on unsafe_layout only. Returns a T represented by the all-zero byte-pattern. Read more
    unsafe fn mem_transmute_copy<Src, Dst>(src: &Src) -> Dst
        Available on unsafe_layout only. Read more
    fn mem_as_bytes(&self) -> &[u8]
        Available on unsafe_slice only.

impl<S> FromSample<S> for S
    fn from_sample_(s: S) -> S

impl<T> Hook for T

impl<T> Instrument for T
    fn instrument(self, span: Span) -> Instrumented<Self>
    fn in_current_span(self) -> Instrumented<Self>

impl<T> IntoEither for T
    fn into_either(self, into_left: bool) -> Either<Self, Self>
        Converts self into a Left variant of Either<Self, Self> if into_left is true. Converts self into a Right variant of Either<Self, Self> otherwise. Read more
    fn into_either_with<F>(self, into_left: F) -> Either<Self, Self>
        Converts self into a Left variant of Either<Self, Self> if into_left(&self) returns true. Converts self into a Right variant of Either<Self, Self> otherwise. Read more

impl<F, T> IntoSample<T> for F
where
    T: FromSample<F>,
    fn into_sample(self) -> T

impl<T> LayoutRaw for T
    fn layout_raw(_: <T as Pointee>::Metadata) -> Result<Layout, LayoutError>

impl<T, N1, N2> Niching<NichedOption<T, N1>> for N2
    unsafe fn is_niched(niched: *const NichedOption<T, N1>) -> bool
    fn resolve_niched(out: Place<NichedOption<T, N1>>)
        Writes data to out indicating that a T is niched.