Struct RwLockWriteGuard

pub struct RwLockWriteGuard<'a, T>
where
    T: ?Sized,
{ /* private fields */ }

Available on crate features dep_tokio and std only.

RAII structure used to release the exclusive write access of a lock when dropped.

This structure is created by the write method on RwLock.
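For orientation, a minimal sketch of how this guard is normally obtained, used, and released; the lock name and value are illustrative only:

use tokio::sync::RwLock;

#[tokio::main]
async fn main() {
    let lock = RwLock::new(5);

    // `write().await` yields a RwLockWriteGuard once exclusive access is granted.
    let mut guard = lock.write().await;
    *guard += 1;

    // Dropping the guard releases the write lock.
    drop(guard);
    assert_eq!(*lock.read().await, 6);
}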
Implementations
impl<'a, T> RwLockWriteGuard<'a, T>
where
    T: ?Sized,

pub fn map<F, U>(this: RwLockWriteGuard<'a, T>, f: F) -> RwLockMappedWriteGuard<'a, U>
where
    F: FnOnce(&mut T) -> &mut U,
    U: ?Sized,
Makes a new RwLockMappedWriteGuard for a component of the locked data.
This operation cannot fail as the RwLockWriteGuard passed in already
locked the data.
This is an associated function that needs to be used as
RwLockWriteGuard::map(..). A method would interfere with methods of
the same name on the contents of the locked data.
This is an asynchronous version of RwLockWriteGuard::map from the
parking_lot crate.
Examples

use tokio::sync::{RwLock, RwLockWriteGuard};

#[derive(Debug, Clone, Copy, PartialEq, Eq)]
struct Foo(u32);

#[tokio::main]
async fn main() {
    let lock = RwLock::new(Foo(1));

    {
        // Narrow the write guard down to the `u32` field.
        let mut mapped = RwLockWriteGuard::map(lock.write().await, |f| &mut f.0);
        *mapped = 2;
    }

    assert_eq!(Foo(2), *lock.read().await);
}

pub fn downgrade_map<F, U>(this: RwLockWriteGuard<'a, T>, f: F) -> RwLockReadGuard<'a, U>
where
    F: FnOnce(&T) -> &U,
    U: ?Sized,
Makes a new RwLockReadGuard for a component of the locked data.
This operation cannot fail as the RwLockWriteGuard passed in already
locked the data.
This is an associated function that needs to be used as
RwLockWriteGuard::downgrade_map(..). A method would interfere with methods of
the same name on the contents of the locked data.
This is equivalent to a combination of asynchronous RwLockWriteGuard::map and RwLockWriteGuard::downgrade
from the parking_lot crate.
Inside of f, you retain exclusive access to the data, despite only being given a &T. Handing out a
&mut T would result in unsoundness, as you could use interior mutability.
Examples

use tokio::sync::{RwLock, RwLockWriteGuard};

#[derive(Debug, Clone, Copy, PartialEq, Eq)]
struct Foo(u32);

#[tokio::main]
async fn main() {
    let lock = RwLock::new(Foo(1));

    let mapped = RwLockWriteGuard::downgrade_map(lock.write().await, |f| &f.0);
    let foo = lock.read().await;

    assert_eq!(foo.0, *mapped);
}

pub fn try_map<F, U>(
    this: RwLockWriteGuard<'a, T>,
    f: F,
) -> Result<RwLockMappedWriteGuard<'a, U>, RwLockWriteGuard<'a, T>>
where
    F: FnOnce(&mut T) -> Option<&mut U>,
    U: ?Sized,
Attempts to make a new RwLockMappedWriteGuard for a component of
the locked data. The original guard is returned if the closure returns
None.
This operation cannot fail as the RwLockWriteGuard passed in already
locked the data.
This is an associated function that needs to be
used as RwLockWriteGuard::try_map(...). A method would interfere with
methods of the same name on the contents of the locked data.
This is an asynchronous version of RwLockWriteGuard::try_map from
the parking_lot crate.
Examples

use tokio::sync::{RwLock, RwLockWriteGuard};

#[derive(Debug, Clone, Copy, PartialEq, Eq)]
struct Foo(u32);

#[tokio::main]
async fn main() {
    let lock = RwLock::new(Foo(1));

    {
        let guard = lock.write().await;
        let mut guard = RwLockWriteGuard::try_map(guard, |f| Some(&mut f.0))
            .expect("should not fail");
        *guard = 2;
    }

    assert_eq!(Foo(2), *lock.read().await);
}

pub fn try_downgrade_map<F, U>(
    this: RwLockWriteGuard<'a, T>,
    f: F,
) -> Result<RwLockReadGuard<'a, U>, RwLockWriteGuard<'a, T>>
where
    F: FnOnce(&T) -> Option<&U>,
    U: ?Sized,
Attempts to make a new RwLockReadGuard for a component of
the locked data. The original guard is returned if the closure returns
None.
This operation cannot fail as the RwLockWriteGuard passed in already
locked the data.
This is an associated function that needs to be
used as RwLockWriteGuard::try_downgrade_map(...). A method would interfere with
methods of the same name on the contents of the locked data.
This is equivalent to a combination of asynchronous RwLockWriteGuard::try_map and RwLockWriteGuard::downgrade
from the parking_lot crate.
Inside of f, you retain exclusive access to the data, despite only being given a &T. Handing out a
&mut T would result in unsoundness, as you could use interior mutability.
If this function returns Err(...), the lock is never unlocked nor downgraded.
Examples

use tokio::sync::{RwLock, RwLockWriteGuard};

#[derive(Debug, Clone, Copy, PartialEq, Eq)]
struct Foo(u32);

#[tokio::main]
async fn main() {
    let lock = RwLock::new(Foo(1));

    let guard = RwLockWriteGuard::try_downgrade_map(lock.write().await, |f| Some(&f.0))
        .expect("should not fail");
    let foo = lock.read().await;

    assert_eq!(foo.0, *guard);
}

pub fn into_mapped(this: RwLockWriteGuard<'a, T>) -> RwLockMappedWriteGuard<'a, T>
Converts this RwLockWriteGuard into an RwLockMappedWriteGuard. This
method can be used to store a non-mapped guard in a struct field that
expects a mapped guard.
This is equivalent to calling RwLockWriteGuard::map(guard, |me| me).
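For instance, a minimal sketch of storing a guard in a struct field that expects the mapped guard type; the Wrapper struct and its field name are illustrative, not part of tokio:

use tokio::sync::{RwLock, RwLockMappedWriteGuard, RwLockWriteGuard};

// Hypothetical holder that always stores the mapped guard type.
struct Wrapper<'a> {
    guard: RwLockMappedWriteGuard<'a, u32>,
}

#[tokio::main]
async fn main() {
    let lock = RwLock::new(1u32);

    // A non-mapped write guard can be stored after converting it.
    let wrapper = Wrapper {
        guard: RwLockWriteGuard::into_mapped(lock.write().await),
    };
    assert_eq!(*wrapper.guard, 1);
}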
pub fn downgrade(self) -> RwLockReadGuard<'a, T>
Atomically downgrades a write lock into a read lock without allowing any writers to take exclusive access of the lock in the meantime.
Note: This won’t necessarily allow any additional readers to acquire
locks, since RwLock is fair and it is possible that a writer is next
in line.
Returns an RAII guard which will drop this read access of the RwLock
when dropped.
Examples

use std::sync::Arc;
use tokio::sync::RwLock;

#[tokio::main]
async fn main() {
    let lock = Arc::new(RwLock::new(1));

    let n = lock.write().await;

    let cloned_lock = lock.clone();
    let handle = tokio::spawn(async move {
        *cloned_lock.write().await = 2;
    });

    let n = n.downgrade();
    assert_eq!(*n, 1, "downgrade is atomic");

    drop(n);
    handle.await.unwrap();
    assert_eq!(*lock.read().await, 2, "second writer obtained write lock");
}

Trait Implementations
impl<'a, T> Debug for RwLockWriteGuard<'a, T>

impl<T> Deref for RwLockWriteGuard<'_, T>
where
    T: ?Sized,

impl<T> DerefMut for RwLockWriteGuard<'_, T>
where
    T: ?Sized,

impl<'a, T> Display for RwLockWriteGuard<'a, T>

impl<'a, T> Drop for RwLockWriteGuard<'a, T>
where
    T: ?Sized,

impl<T> Send for RwLockWriteGuard<'_, T>

impl<T> Sync for RwLockWriteGuard<'_, T>
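Because the guard implements Deref and DerefMut, the protected value can be read and modified directly through it while the write lock is held; a minimal sketch (the String value is illustrative only):

use tokio::sync::RwLock;

#[tokio::main]
async fn main() {
    let lock = RwLock::new(String::from("hello"));

    let mut guard = lock.write().await;
    // DerefMut: mutate the protected String through the guard.
    guard.push_str(", world");
    // Deref: read it back through the same guard.
    assert_eq!(&*guard, "hello, world");
}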
Auto Trait Implementations

impl<'a, T> Freeze for RwLockWriteGuard<'a, T>
where
    T: ?Sized,

impl<'a, T> RefUnwindSafe for RwLockWriteGuard<'a, T>
where
    T: RefUnwindSafe + ?Sized,

impl<'a, T> Unpin for RwLockWriteGuard<'a, T>
where
    T: ?Sized,

impl<'a, T> !UnwindSafe for RwLockWriteGuard<'a, T>
Blanket Implementations

impl<T> BorrowMut<T> for T
where
    T: ?Sized,

    fn borrow_mut(&mut self) -> &mut T

impl<T> ByteSized for T

    const BYTE_ALIGN: usize = _
    fn byte_align(&self) -> usize
    fn ptr_size_ratio(&self) -> [usize; 2]

impl<T, R> Chain<R> for T
where
    T: ?Sized,

impl<T> ExtAny for T

    fn type_hash_with<H: Hasher>(&self, hasher: H) -> u64
        TypeId of Self using a custom hasher.
    fn as_any_mut(&mut self) -> &mut dyn Any
    where
        Self: Sized,

impl<T> ExtMem for T
where
    T: ?Sized,

    const NEEDS_DROP: bool = _
    fn mem_align_of<T>() -> usize
    fn mem_align_of_val(&self) -> usize
    fn mem_size_of<T>() -> usize
    fn mem_size_of_val(&self) -> usize
    fn mem_needs_drop(&self) -> bool
        true if dropping values of this type matters.
    fn mem_forget(self)
    where
        Self: Sized,
        Forgets self without running its destructor.
    fn mem_replace(&mut self, other: Self) -> Self
    where
        Self: Sized,
    unsafe fn mem_zeroed<T>() -> T
        Available on unsafe_layout only. T represented by the all-zero byte-pattern.
    unsafe fn mem_transmute_copy<Src, Dst>(src: &Src) -> Dst
        Available on unsafe_layout only.
    fn mem_as_bytes(&self) -> &[u8]
        Available on unsafe_slice only.

impl<S> FromSample<S> for S

    fn from_sample_(s: S) -> S

impl<T> Hook for T

impl<T> IntoEither for T

    fn into_either(self, into_left: bool) -> Either<Self, Self>
        Converts self into a Left variant of Either<Self, Self> if into_left is true,
        and into a Right variant otherwise.
    fn into_either_with<F>(self, into_left: F) -> Either<Self, Self>
        Converts self into a Left variant of Either<Self, Self> if into_left(&self) returns true,
        and into a Right variant otherwise.

impl<F, T> IntoSample<T> for F
where
    T: FromSample<F>,

    fn into_sample(self) -> T

impl<T> Pointable for T

impl<T, U> ToSample<U> for T
where
    U: FromSample<T>,

    fn to_sample_(self) -> U

impl<R> TryRngCore for R

    type Error = Infallible

    fn try_next_u32(&mut self) -> Result<u32, <R as TryRngCore>::Error>
        Return the next random u32.
    fn try_next_u64(&mut self) -> Result<u64, <R as TryRngCore>::Error>
        Return the next random u64.
    fn try_fill_bytes(&mut self, dst: &mut [u8]) -> Result<(), <R as TryRngCore>::Error>
        Fill dst entirely with random data.
    fn unwrap_mut(&mut self) -> UnwrapMut<'_, Self>
        Wrap the RNG in an UnwrapMut wrapper.