Struct Arc

pub struct Arc<T> where T: ?Sized { /* private fields */ }

Available on crate feature alloc only.

A thread-safe reference-counting pointer. ‘Arc’ stands for ‘Atomically Reference Counted’.
This is equivalent to std::sync::Arc, but using portable-atomic for synchronization. See the documentation for std::sync::Arc for more details.

Note: Unlike std::sync::Arc, coercing Arc<T> to Arc<U> is only possible if the optional cfg portable_atomic_unstable_coerce_unsized is enabled, as documented in the crate-level documentation, and this optional cfg is only supported on the Rust nightly toolchain. This is because coercing the pointee requires the unstable CoerceUnsized trait. See this issue comment for a workaround that works without depending on unstable features.
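One such workaround, sketched here on the assumption that a Box-to-Arc conversion fits your use case, goes through Box: perform the unsized coercion on a Box (which works on stable) and then convert it with the crate’s From<Box<T>> impl, as the downcast example further down this page also does.

use portable_atomic_util::Arc;
use std::any::Any;

// Coerce on `Box` (stable), then convert: `From<Box<T>>` moves the value
// into a new `Arc` allocation, so no `CoerceUnsized` impl on `Arc` is needed.
let boxed: Box<dyn Any + Send + Sync> = Box::new(5i32);
let arc: Arc<dyn Any + Send + Sync> = Arc::from(boxed);
assert!(arc.downcast_ref::<i32>().is_some());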
§Examples
use portable_atomic_util::Arc;
use std::thread;
let five = Arc::new(5);
for _ in 0..10 {
let five = Arc::clone(&five);
thread::spawn(move || {
assert_eq!(*five, 5);
});
}
§Implementations

impl<T> Arc<T>

pub fn new_cyclic<F>(data_fn: F) -> Arc<T>
Constructs a new Arc<T> while giving you a Weak<T> to the allocation, to allow you to construct a T which holds a weak pointer to itself.

Generally, a structure circularly referencing itself, either directly or indirectly, should not hold a strong reference to itself to prevent a memory leak. Using this function, you get access to the weak pointer during the initialization of T, before the Arc<T> is created, such that you can clone and store it inside the T.

new_cyclic first allocates the managed allocation for the Arc<T>, then calls your closure, giving it a Weak<T> to this allocation, and only afterwards completes the construction of the Arc<T> by placing the T returned from your closure into the allocation.

Since the new Arc<T> is not fully-constructed until Arc<T>::new_cyclic returns, calling upgrade on the weak reference inside your closure will fail and result in a None value.

§Panics

If data_fn panics, the panic is propagated to the caller, and the temporary Weak<T> is dropped normally.
§Example
use portable_atomic_util::{Arc, Weak};
struct Gadget {
me: Weak<Gadget>,
}
impl Gadget {
/// Constructs a reference counted Gadget.
fn new() -> Arc<Self> {
// `me` is a `Weak<Gadget>` pointing at the new allocation of the
// `Arc` we're constructing.
Arc::new_cyclic(|me| {
// Create the actual struct here.
Gadget { me: me.clone() }
})
}
/// Returns a reference counted pointer to Self.
fn me(&self) -> Arc<Self> {
self.me.upgrade().unwrap()
}
}
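A minimal sketch of the behavior noted above: upgrading the Weak inside the closure yields None, because the Arc is not yet fully constructed.

use portable_atomic_util::Arc;

let five = Arc::new_cyclic(|me| {
    // The `Arc` is not fully constructed yet, so `upgrade` fails here.
    assert!(me.upgrade().is_none());
    5
});
assert_eq!(*five, 5);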
pub fn new_uninit() -> Arc<MaybeUninit<T>>

Constructs a new Arc with uninitialized contents.
§Examples
use portable_atomic_util::Arc;
let mut five = Arc::<u32>::new_uninit();
// Deferred initialization:
Arc::get_mut(&mut five).unwrap().write(5);
let five = unsafe { five.assume_init() };
assert_eq!(*five, 5)
pub fn pin(data: T) -> Pin<Arc<T>>

Constructs a new Pin<Arc<T>>. If T does not implement Unpin, then data will be pinned in memory and unable to be moved.
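§Examples

A minimal usage sketch: the pinned value can still be read through the smart pointer as usual.

use portable_atomic_util::Arc;
use std::pin::Pin;

// The allocation is pinned for its whole lifetime; reads work as usual.
let pinned: Pin<Arc<u32>> = Arc::pin(5);
assert_eq!(*pinned, 5);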
pub fn try_unwrap(this: Arc<T>) -> Result<T, Arc<T>>

Returns the inner value, if the Arc has exactly one strong reference. Otherwise, an Err is returned with the same Arc that was passed in. This will succeed even if there are outstanding weak references.

It is strongly recommended to use Arc::into_inner instead if you don’t keep the Arc in the Err case. Immediately dropping the Err-value, as the expression Arc::try_unwrap(this).ok() does, can cause the strong count to drop to zero and the inner value of the Arc to be dropped. For instance, if two threads execute such an expression in parallel, there is a race condition without the possibility of unsafety: the threads could first both check whether they own the last instance in Arc::try_unwrap, determine that they both do not, and then both discard and drop their instance in the call to ok. In this scenario, the value inside the Arc is safely destroyed by exactly one of the threads, but neither thread will ever be able to use the value.
§Examples
use portable_atomic_util::Arc;
let x = Arc::new(3);
assert_eq!(Arc::try_unwrap(x), Ok(3));
let x = Arc::new(4);
let _y = Arc::clone(&x);
assert_eq!(*Arc::try_unwrap(x).unwrap_err(), 4);
pub fn into_inner(this: Arc<T>) -> Option<T>

Returns the inner value, if the Arc has exactly one strong reference. Otherwise, None is returned and the Arc is dropped. This will succeed even if there are outstanding weak references.

If Arc::into_inner is called on every clone of this Arc, it is guaranteed that exactly one of the calls returns the inner value. This means in particular that the inner value is not dropped.

Arc::try_unwrap is conceptually similar to Arc::into_inner, but it is meant for different use-cases. If used as a direct replacement for Arc::into_inner anyway, such as with the expression Arc::try_unwrap(this).ok(), then it does not give the same guarantee as described in the previous paragraph. For more information, see the examples below and read the documentation of Arc::try_unwrap.

§Examples

Minimal example demonstrating the guarantee that Arc::into_inner gives.
use portable_atomic_util::Arc;
let x = Arc::new(3);
let y = Arc::clone(&x);
// Two threads calling `Arc::into_inner` on both clones of an `Arc`:
let x_thread = std::thread::spawn(|| Arc::into_inner(x));
let y_thread = std::thread::spawn(|| Arc::into_inner(y));
let x_inner_value = x_thread.join().unwrap();
let y_inner_value = y_thread.join().unwrap();
// One of the threads is guaranteed to receive the inner value:
assert!(matches!((x_inner_value, y_inner_value), (None, Some(3)) | (Some(3), None)));
// The result could also be `(None, None)` if the threads called
// `Arc::try_unwrap(x).ok()` and `Arc::try_unwrap(y).ok()` instead.
A more practical example demonstrating the need for Arc::into_inner:
use portable_atomic_util::Arc;
// Definition of a simple singly linked list using `Arc`:
#[derive(Clone)]
struct LinkedList<T>(Option<Arc<Node<T>>>);
struct Node<T>(T, Option<Arc<Node<T>>>);
// Dropping a long `LinkedList<T>` relying on the destructor of `Arc`
// can cause a stack overflow. To prevent this, we can provide a
// manual `Drop` implementation that does the destruction in a loop:
impl<T> Drop for LinkedList<T> {
fn drop(&mut self) {
let mut link = self.0.take();
while let Some(arc_node) = link.take() {
if let Some(Node(_value, next)) = Arc::into_inner(arc_node) {
link = next;
}
}
}
}
// Implementation of `new` and `push` omitted
impl<T> LinkedList<T> {
/* ... */
}
// The following code could have still caused a stack overflow
// despite the manual `Drop` impl if that `Drop` impl had used
// `Arc::try_unwrap(arc).ok()` instead of `Arc::into_inner(arc)`.
// Create a long list and clone it
let mut x = LinkedList::new();
let size = 100000;
for i in 0..size {
x.push(i); // Adds i to the front of x
}
let y = x.clone();
// Drop the clones in parallel
let x_thread = std::thread::spawn(|| drop(x));
let y_thread = std::thread::spawn(|| drop(y));
x_thread.join().unwrap();
y_thread.join().unwrap();
impl<T> Arc<[T]>

pub fn new_uninit_slice(len: usize) -> Arc<[MaybeUninit<T>]>

Constructs a new atomically reference-counted slice with uninitialized contents.
§Examples
use portable_atomic_util::Arc;
let mut values = Arc::<[u32]>::new_uninit_slice(3);
// Deferred initialization:
let data = Arc::get_mut(&mut values).unwrap();
data[0].write(1);
data[1].write(2);
data[2].write(3);
let values = unsafe { values.assume_init() };
assert_eq!(*values, [1, 2, 3])
impl<T> Arc<MaybeUninit<T>>

pub unsafe fn assume_init(self) -> Arc<T>

Converts to Arc<T>.

§Safety

As with MaybeUninit::assume_init, it is up to the caller to guarantee that the inner value really is in an initialized state. Calling this when the content is not yet fully initialized causes immediate undefined behavior.
§Examples
use portable_atomic_util::Arc;
let mut five = Arc::<u32>::new_uninit();
// Deferred initialization:
Arc::get_mut(&mut five).unwrap().write(5);
let five = unsafe { five.assume_init() };
assert_eq!(*five, 5)
impl<T> Arc<[MaybeUninit<T>]>

pub unsafe fn assume_init(self) -> Arc<[T]>

Converts to Arc<[T]>.

§Safety

As with MaybeUninit::assume_init, it is up to the caller to guarantee that the inner value really is in an initialized state. Calling this when the content is not yet fully initialized causes immediate undefined behavior.
§Examples
use portable_atomic_util::Arc;
let mut values = Arc::<[u32]>::new_uninit_slice(3);
// Deferred initialization:
let data = Arc::get_mut(&mut values).unwrap();
data[0].write(1);
data[1].write(2);
data[2].write(3);
let values = unsafe { values.assume_init() };
assert_eq!(*values, [1, 2, 3])
impl<T> Arc<T> where T: ?Sized

pub unsafe fn from_raw(ptr: *const T) -> Arc<T>

Constructs an Arc<T> from a raw pointer.

§Safety

The raw pointer must have been previously returned by a call to Arc<U>::into_raw with the following requirements:

- If U is sized, it must have the same size and alignment as T. This is trivially true if U is T.
- If U is unsized, its data pointer must have the same size and alignment as T. This is trivially true if Arc<U> was constructed through Arc<T> and then converted to Arc<U> through an unsized coercion.

Note that if U or U’s data pointer is not T but has the same size and alignment, this is basically like transmuting references of different types. See mem::transmute for more information on what restrictions apply in this case.

The user of from_raw has to make sure a specific value of T is only dropped once.

This function is unsafe because improper use may lead to memory unsafety, even if the returned Arc<T> is never accessed.
§Examples
use portable_atomic_util::Arc;
let x = Arc::new("hello".to_owned());
let x_ptr = Arc::into_raw(x);
unsafe {
// Convert back to an `Arc` to prevent leak.
let x = Arc::from_raw(x_ptr);
assert_eq!(&*x, "hello");
// Further calls to `Arc::from_raw(x_ptr)` would be memory-unsafe.
}
// The memory was freed when `x` went out of scope above, so `x_ptr` is now dangling!
Convert a slice back into its original array:
use portable_atomic_util::Arc;
let x: Arc<[u32]> = Arc::from([1, 2, 3]);
let x_ptr: *const [u32] = Arc::into_raw(x);
unsafe {
let x: Arc<[u32; 3]> = Arc::from_raw(x_ptr.cast::<[u32; 3]>());
assert_eq!(&*x, &[1, 2, 3]);
}
pub unsafe fn increment_strong_count(ptr: *const T)

Increments the strong reference count on the Arc<T> associated with the provided pointer by one.

§Safety

The pointer must have been obtained through Arc::into_raw, and the associated Arc instance must be valid (i.e. the strong count must be at least 1) for the duration of this method.
§Examples
use portable_atomic_util::Arc;
let five = Arc::new(5);
unsafe {
let ptr = Arc::into_raw(five);
Arc::increment_strong_count(ptr);
// This assertion is deterministic because we haven't shared
// the `Arc` between threads.
let five = Arc::from_raw(ptr);
assert_eq!(2, Arc::strong_count(&five));
}
pub unsafe fn decrement_strong_count(ptr: *const T)

Decrements the strong reference count on the Arc<T> associated with the provided pointer by one.

§Safety

The pointer must have been obtained through Arc::into_raw, and the associated Arc instance must be valid (i.e. the strong count must be at least 1) when invoking this method. This method can be used to release the final Arc and backing storage, but should not be called after the final Arc has been released.
§Examples
use portable_atomic_util::Arc;
let five = Arc::new(5);
unsafe {
let ptr = Arc::into_raw(five);
Arc::increment_strong_count(ptr);
// Those assertions are deterministic because we haven't shared
// the `Arc` between threads.
let five = Arc::from_raw(ptr);
assert_eq!(2, Arc::strong_count(&five));
Arc::decrement_strong_count(ptr);
assert_eq!(1, Arc::strong_count(&five));
}
impl<T> Arc<T> where T: ?Sized

pub fn into_raw(this: Arc<T>) -> *const T

Consumes the Arc, returning the wrapped pointer. To avoid a memory leak the pointer must be converted back to an Arc using Arc::from_raw.
§Examples
use portable_atomic_util::Arc;
let x = Arc::new("hello".to_owned());
let x_ptr = Arc::into_raw(x);
assert_eq!(unsafe { &*x_ptr }, "hello");
pub fn as_ptr(this: &Arc<T>) -> *const T

Provides a raw pointer to the data. The counts are not affected in any way and the Arc is not consumed. The pointer is valid for as long as there are strong counts in the Arc.
§Examples
use portable_atomic_util::Arc;
let x = Arc::new("hello".to_owned());
let y = Arc::clone(&x);
let x_ptr = Arc::as_ptr(&x);
assert_eq!(x_ptr, Arc::as_ptr(&y));
assert_eq!(unsafe { &*x_ptr }, "hello");
pub fn weak_count(this: &Arc<T>) -> usize

Gets the number of Weak pointers to this allocation.
§Safety
This method by itself is safe, but using it correctly requires extra care. Another thread can change the weak count at any time, including potentially between calling this method and acting on the result.
§Examples
use portable_atomic_util::Arc;
let five = Arc::new(5);
let _weak_five = Arc::downgrade(&five);
// This assertion is deterministic because we haven't shared
// the `Arc` or `Weak` between threads.
assert_eq!(1, Arc::weak_count(&five));
pub fn strong_count(this: &Arc<T>) -> usize

Gets the number of strong (Arc) pointers to this allocation.
§Safety
This method by itself is safe, but using it correctly requires extra care. Another thread can change the strong count at any time, including potentially between calling this method and acting on the result.
§Examples
use portable_atomic_util::Arc;
let five = Arc::new(5);
let _also_five = Arc::clone(&five);
// This assertion is deterministic because we haven't shared
// the `Arc` between threads.
assert_eq!(2, Arc::strong_count(&five));
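A hedged sketch of the caveat above: once the Arc has been shared with another thread, a count read here is only a snapshot.

use portable_atomic_util::Arc;
use std::thread;

let value = Arc::new(0u32);
let clone = Arc::clone(&value);
// The spawned thread may drop its clone at any moment, so the count read
// below is only a snapshot: it can legitimately be 1 or 2.
let handle = thread::spawn(move || drop(clone));
let snapshot = Arc::strong_count(&value);
assert!(snapshot == 1 || snapshot == 2);
handle.join().unwrap();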
pub fn ptr_eq(this: &Arc<T>, other: &Arc<T>) -> bool

Returns true if the two Arcs point to the same allocation in a vein similar to ptr::eq. This function ignores the metadata of dyn Trait pointers.
§Examples
use portable_atomic_util::Arc;
let five = Arc::new(5);
let same_five = Arc::clone(&five);
let other_five = Arc::new(5);
assert!(Arc::ptr_eq(&five, &same_five));
assert!(!Arc::ptr_eq(&five, &other_five));
impl<T> Arc<T> where T: CloneToUninit + ?Sized

pub fn make_mut(this: &mut Arc<T>) -> &mut T

Makes a mutable reference into the given Arc.

If there are other Arc pointers to the same allocation, then make_mut will clone the inner value to a new allocation to ensure unique ownership. This is also referred to as clone-on-write.

However, if there are no other Arc pointers to this allocation, but some Weak pointers, then the Weak pointers will be dissociated and the inner value will not be cloned.

See also get_mut, which will fail rather than cloning the inner value or dissociating Weak pointers.
§Examples
use portable_atomic_util::Arc;
let mut data = Arc::new(5);
*Arc::make_mut(&mut data) += 1; // Won't clone anything
let mut other_data = Arc::clone(&data); // Won't clone inner data
*Arc::make_mut(&mut data) += 1; // Clones inner data
*Arc::make_mut(&mut data) += 1; // Won't clone anything
*Arc::make_mut(&mut other_data) *= 2; // Won't clone anything
// Now `data` and `other_data` point to different allocations.
assert_eq!(*data, 8);
assert_eq!(*other_data, 12);
Weak pointers will be dissociated:
use portable_atomic_util::Arc;
let mut data = Arc::new(75);
let weak = Arc::downgrade(&data);
assert!(75 == *data);
assert!(75 == *weak.upgrade().unwrap());
*Arc::make_mut(&mut data) += 1;
assert!(76 == *data);
assert!(weak.upgrade().is_none());
impl<T> Arc<T> where T: Clone

pub fn unwrap_or_clone(this: Arc<T>) -> T

If we have the only reference to T then unwrap it. Otherwise, clone T and return the clone.

Assuming arc_t is of type Arc<T>, this function is functionally equivalent to (*arc_t).clone(), but will avoid cloning the inner value where possible.
§Examples
use portable_atomic_util::Arc;
use std::ptr;
let inner = String::from("test");
let ptr = inner.as_ptr();
let arc = Arc::new(inner);
let inner = Arc::unwrap_or_clone(arc);
// The inner value was not cloned
assert!(ptr::eq(ptr, inner.as_ptr()));
let arc = Arc::new(inner);
let arc2 = arc.clone();
let inner = Arc::unwrap_or_clone(arc);
// Because there were 2 references, we had to clone the inner value.
assert!(!ptr::eq(ptr, inner.as_ptr()));
// `arc2` is the last reference, so when we unwrap it we get back
// the original `String`.
let inner = Arc::unwrap_or_clone(arc2);
assert!(ptr::eq(ptr, inner.as_ptr()));
impl<T> Arc<T> where T: ?Sized

pub fn get_mut(this: &mut Arc<T>) -> Option<&mut T>

Returns a mutable reference into the given Arc, if there are no other Arc or Weak pointers to the same allocation. Returns None otherwise, because it is not safe to mutate a shared value.

See also make_mut, which will clone the inner value when there are other Arc pointers.
§Examples
use portable_atomic_util::Arc;
let mut x = Arc::new(3);
*Arc::get_mut(&mut x).unwrap() = 4;
assert_eq!(*x, 4);
let _y = Arc::clone(&x);
assert!(Arc::get_mut(&mut x).is_none());
impl Arc<dyn Any + Send + Sync>

pub fn downcast<T>(self) -> Result<Arc<T>, Arc<dyn Any + Send + Sync>>

Attempts to downcast the Arc<dyn Any + Send + Sync> to a concrete type.
§Examples
use portable_atomic_util::Arc;
use std::any::Any;
fn print_if_string(value: Arc<dyn Any + Send + Sync>) {
if let Ok(string) = value.downcast::<String>() {
println!("String ({}): {}", string.len(), string);
}
}
let my_string = "Hello World".to_string();
print_if_string(Arc::from(Box::new(my_string) as Box<dyn Any + Send + Sync>));
print_if_string(Arc::from(Box::new(0i8) as Box<dyn Any + Send + Sync>));
// or with "--cfg portable_atomic_unstable_coerce_unsized" in RUSTFLAGS (requires Rust nightly):
// print_if_string(Arc::new(my_string));
// print_if_string(Arc::new(0i8));
§Trait Implementations

impl<T> AsFd for Arc<T> where T: AsFd

This impl allows implementing traits that require AsFd on Arc.

use portable_atomic_util::Arc;
use std::net::UdpSocket;
use std::os::fd::AsFd;

trait MyTrait: AsFd {}
impl MyTrait for Arc<UdpSocket> {}

fn as_fd(&self) -> BorrowedFd<'_>
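For instance (a hedged sketch, assuming a Unix-like target where std::os::fd is available), the descriptor can be borrowed through the shared handle without unwrapping the Arc:

use portable_atomic_util::Arc;
use std::net::UdpSocket;
use std::os::fd::AsFd;

fn main() -> std::io::Result<()> {
    // Bind to an ephemeral local port and share the socket.
    let socket = Arc::new(UdpSocket::bind("127.0.0.1:0")?);
    let clone = Arc::clone(&socket);
    // Borrow the file descriptor without consuming either handle.
    let _fd = clone.as_fd();
    Ok(())
}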
impl<T> AsRawFd for Arc<T> where T: AsRawFd

This impl allows implementing traits that require AsRawFd on Arc.

use portable_atomic_util::Arc;
use std::net::UdpSocket;
use std::os::fd::AsRawFd;

trait MyTrait: AsRawFd {}
impl MyTrait for Arc<UdpSocket> {}
impl<T> Clone for Arc<T> where T: ?Sized

fn clone(&self) -> Arc<T>

Makes a clone of the Arc pointer. This creates another pointer to the same allocation, increasing the strong reference count.
§Examples
use portable_atomic_util::Arc;
let five = Arc::new(5);
let _ = Arc::clone(&five);
fn clone_from(&mut self, source: &Self)

Performs copy-assignment from source.

impl<T> Drop for Arc<T> where T: ?Sized
fn drop(&mut self)

Drops the Arc. This will decrement the strong reference count. If the strong reference count reaches zero then the only other references (if any) are Weak, so we drop the inner value.
§Examples
use portable_atomic_util::Arc;
struct Foo;
impl Drop for Foo {
fn drop(&mut self) {
println!("dropped!");
}
}
let foo = Arc::new(Foo);
let foo2 = Arc::clone(&foo);
drop(foo); // Doesn't print anything
drop(foo2); // Prints "dropped!"
impl<T> Error for Arc<T> where T: Error + ?Sized

fn description(&self) -> &str

fn cause(&self) -> Option<&dyn Error>
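§Examples

A small sketch (MyError is a hypothetical type for illustration): the impl delegates to the inner value, so an Arc-wrapped error can be used wherever dyn Error is expected.

use portable_atomic_util::Arc;
use std::error::Error;
use std::fmt;

#[derive(Debug)]
struct MyError;

impl fmt::Display for MyError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "my error")
    }
}

impl Error for MyError {}

let err: Arc<MyError> = Arc::new(MyError);
// `Arc<MyError>` implements `Error` by delegating to `MyError`.
let as_dyn: &dyn Error = &err;
assert_eq!(as_dyn.to_string(), "my error");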
impl<'a, B> From<Cow<'a, B>> for Arc<B>

fn from(cow: Cow<'a, B>) -> Arc<B>

Creates an atomically reference-counted pointer from a clone-on-write pointer by copying its content.
§Example
use portable_atomic_util::Arc;
use std::borrow::Cow;
let cow: Cow<'_, str> = Cow::Borrowed("eggplant");
let shared: Arc<str> = Arc::from(cow);
assert_eq!("eggplant", &shared[..]);
impl<T> FromIterator<T> for Arc<[T]>

fn from_iter<I>(iter: I) -> Arc<[T]> where I: IntoIterator<Item = T>

Takes each element in the Iterator and collects it into an Arc<[T]>.
§Performance characteristics

§The general case

In the general case, collecting into Arc<[T]> is done by first collecting into a Vec<T>. That is, when writing the following:

use portable_atomic_util::Arc;

let evens: Arc<[u8]> = (0..10).filter(|&x| x % 2 == 0).collect();

this behaves as if we wrote:

use portable_atomic_util::Arc;

let evens: Arc<[u8]> = (0..10).filter(|&x| x % 2 == 0)
    .collect::<Vec<_>>() // The first set of allocations happens here.
    .into(); // A second allocation for `Arc<[T]>` happens here.

This will allocate as many times as needed for constructing the Vec<T> and then it will allocate once for turning the Vec<T> into the Arc<[T]>.
§Iterators of known length

When your Iterator implements TrustedLen and is of an exact size, a single allocation will be made for the Arc<[T]>. For example:
use portable_atomic_util::Arc;
let evens: Arc<[u8]> = (0..10).collect(); // Just a single allocation happens here.
impl<T> Ord for Arc<T> where T: Ord + ?Sized

fn cmp(&self, other: &Arc<T>) -> Ordering

Comparison for two Arcs. The two are compared by calling cmp() on their inner values.
§Examples
use portable_atomic_util::Arc;
use std::cmp::Ordering;
let five = Arc::new(5);
assert_eq!(Ordering::Less, five.cmp(&Arc::new(6)));
fn max(self, other: Self) -> Self where Self: Sized

Returns the maximum of two values.
impl<T> PartialEq for Arc<T> where T: PartialEq + ?Sized

fn eq(&self, other: &Arc<T>) -> bool

Equality for two Arcs. Two Arcs are equal if their inner values are equal, even if they are stored in different allocations.

If T also implements Eq (implying reflexivity of equality), two Arcs that point to the same allocation are always equal.
§Examples
use portable_atomic_util::Arc;
let five = Arc::new(5);
assert!(five == Arc::new(5));
fn ne(&self, other: &Arc<T>) -> bool

Inequality for two Arcs. Two Arcs are not equal if their inner values are not equal.

If T also implements Eq (implying reflexivity of equality), two Arcs that point to the same allocation are never unequal.
§Examples
use portable_atomic_util::Arc;
let five = Arc::new(5);
assert!(five != Arc::new(6));
impl<T> PartialOrd for Arc<T> where T: PartialOrd + ?Sized

fn partial_cmp(&self, other: &Arc<T>) -> Option<Ordering>

Partial comparison for two Arcs. The two are compared by calling partial_cmp() on their inner values.
§Examples
use portable_atomic_util::Arc;
use std::cmp::Ordering;
let five = Arc::new(5);
assert_eq!(Some(Ordering::Less), five.partial_cmp(&Arc::new(6)));
fn lt(&self, other: &Arc<T>) -> bool

Less-than comparison for two Arcs. The two are compared by calling < on their inner values.
§Examples
use portable_atomic_util::Arc;
let five = Arc::new(5);
assert!(five < Arc::new(6));
fn le(&self, other: &Arc<T>) -> bool

‘Less than or equal to’ comparison for two Arcs. The two are compared by calling <= on their inner values.
§Examples
use portable_atomic_util::Arc;
let five = Arc::new(5);
assert!(five <= Arc::new(5));
impl Read for Arc<File>

fn read(&mut self, buf: &mut [u8]) -> Result<usize, Error>

fn read_vectored(&mut self, bufs: &mut [IoSliceMut<'_>]) -> Result<usize, Error>
Like read, except that it reads into a slice of buffers.

fn read_to_end(&mut self, buf: &mut Vec<u8>) -> Result<usize, Error>
Reads all bytes until EOF in this source, placing them into buf.

fn read_to_string(&mut self, buf: &mut String) -> Result<usize, Error>
Reads all bytes until EOF in this source, appending them to buf.

fn is_read_vectored(&self) -> bool
Unstable (can_vector).

fn read_exact(&mut self, buf: &mut [u8]) -> Result<(), Error>
Reads the exact number of bytes required to fill buf.

fn read_buf(&mut self, buf: BorrowedCursor<'_>) -> Result<(), Error>
Unstable (read_buf).

fn read_buf_exact(&mut self, cursor: BorrowedCursor<'_>) -> Result<(), Error>
Unstable (read_buf). Reads the exact number of bytes required to fill cursor.

fn by_ref(&mut self) -> &mut Self where Self: Sized
Creates a “by reference” adaptor for this instance of Read.

impl Seek for Arc<File>
fn seek(&mut self, pos: SeekFrom) -> Result<u64, Error>

fn rewind(&mut self) -> Result<(), Error>
Rewinds to the beginning of a stream.

fn stream_len(&mut self) -> Result<u64, Error>
Unstable (seek_stream_len).

impl Write for Arc<File>
fn write(&mut self, buf: &[u8]) -> Result<usize, Error>

fn flush(&mut self) -> Result<(), Error>

fn is_write_vectored(&self) -> bool
Unstable (can_vector).

fn write_all(&mut self, buf: &[u8]) -> Result<(), Error>
Attempts to write an entire buffer into this writer.

fn write_all_vectored(&mut self, bufs: &mut [IoSlice<'_>]) -> Result<(), Error>
Unstable (write_all_vectored).

impl<T> Eq for Arc<T> where T: Eq + ?Sized
impl<T> Send for Arc<T> where T: Send + Sync + ?Sized

impl<T> Sync for Arc<T> where T: Send + Sync + ?Sized

impl<T> Unpin for Arc<T> where T: ?Sized

impl<T> UnwindSafe for Arc<T> where T: RefUnwindSafe + ?Sized
§Auto Trait Implementations

impl<T> Freeze for Arc<T> where T: ?Sized

impl<T> RefUnwindSafe for Arc<T> where T: RefUnwindSafe + ?Sized
§Blanket Implementations

impl<T> BorrowMut<T> for T where T: ?Sized

impl<T> ByteSized for T

impl<T, R> Chain<R> for T where T: ?Sized

impl<T> CloneToUninit for T where T: Clone

impl<Q, K> Comparable<K> for Q

impl<Q, K> Equivalent<K> for Q

impl<T> ExecutableCommand for T

impl<T> ExtAny for T

impl<T> ExtMem for T where T: ?Sized

impl<S> FromSample<S> for S

impl<T> Hook for T

impl<T> Instrument for T

impl<T> IntoEither for T

impl<F, T> IntoSample<T> for F where T: FromSample<F>

impl<T> Pointable for T

impl<T> QueueableCommand for T

impl<R> ReadBytesExt for R

impl<'a, T, N> StringZilla<'a, N> for T

impl<W> SynchronizedUpdate for W
When the synchronization mode is enabled following render calls will keep rendering the last rendered state. The terminal Emulator keeps processing incoming text and sequences. When the synchronized update mode is disabled again the renderer may fetch the latest screen buffer state again, effectively avoiding the tearing effect by unintentionally rendering in the middle a of an application screen update.