Compiling with threads
<pthread.h>
clang -pthread x.c
Thread functions
1. int pthread_create(pthread_t *thread, const pthread_attr_t *attr,
void *(*start_routine) (void*), void* arg);
EXAMPLE :
pthread_create (&threads[i], NULL, thread_function, NULL);
Starts a new thread in the calling process; the new thread starts execution by invoking start_routine()
arg is passed as the sole argument of start_routine()
Terminates via
1. pthread_exit
2. It returns from start_routine(); this is equivalent to calling pthread_exit()
3. pthread_cancel
4. Any thread calls exit(), or main() returns; this terminates all threads in the process
If attr is NULL, thread is created with default attributes
Otherwise, it points to a structure whose contents are used at thread creation time to determine the attributes of the new thread
A successful call stores the ID of the new thread in the buffer pointed to by thread; this ID is used to refer to the thread in subsequent calls to other pthreads functions
On success returns 0; on error returns an error number, and the contents of *thread are undefined
2. int pthread_join(pthread_t thread, void ** retval)
EXAMPLE:
pthread_join(threads[x],NULL)
Waits for the thread specified by thread to terminate, then returns
Specified thread must be joinable
If retval is not NULL, pthread_join copies the exit status of the target thread into the location pointed to by retval
If thread canceled, PTHREAD_CANCELED is placed in location pointed to by retval
If multiple threads simultaneously try to join with the same thread, the results are undefined
If the thread calling pthread_join is canceled, target thread will remain joinable
Returns 0 on success; otherwise returns an error number (see the full example below)
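Putting the two calls together, a minimal runnable sketch (the thread count and the work done in thread_function are arbitrary choices for illustration):
EXAMPLE (sketch):
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

#define NUM_THREADS 4               /* arbitrary choice for the sketch */

void *thread_function(void *arg) {
    long id = (long) arg;           /* arg passed via pthread_create */
    printf("thread %ld running\n", id);
    return (void *) (id * 2);       /* value picked up by pthread_join */
}

int main(void) {
    pthread_t threads[NUM_THREADS];
    for (long i = 0; i < NUM_THREADS; i++) {
        if (pthread_create(&threads[i], NULL, thread_function, (void *) i) != 0) {
            fprintf(stderr, "pthread_create failed\n");
            exit(1);
        }
    }
    for (int i = 0; i < NUM_THREADS; i++) {
        void *retval;
        pthread_join(threads[i], &retval);   /* wait and collect the exit status */
        printf("thread %d returned %ld\n", i, (long) retval);
    }
    return 0;
}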
Race condition: multiple threads read and write a shared data item, and the final result depends on their relative timing (which thread finishes last)
Critical section: two or more code parts that access and manipulate shared data (shared resource)
To prevent races, ensure no two threads concurrently execute within the critical section (mutual exclusion)
Synchronize: to make things happen at the same time
Mutexes
Translation: mutual-exclusion object
Used to synchronize access to shared global variables (shared between threads)
EXAMPLE:
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
for (...) {
    pthread_mutex_lock(&lock);
    // critical section
    pthread_mutex_unlock(&lock);
}
Before a thread enters the critical section, it locks the mutex
When the thread leaves, it unlocks the mutex
The mutex ensures only one thread is allowed into the critical section at a time
All other threads have to wait until the critical section is free
The first line is a static initialization, the standard way to declare a mutex
Mutexes have attributes; a mutex is initially free (unlocked)
When a mutex is locked, other threads that call pthread_mutex_lock() are blocked
and do not return until the calling thread has been given the lock
They are kept in a waiting queue and must wait until the critical section is unlocked
No fairness is guaranteed by the pthreads library
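As an illustration of the locking pattern above, a small runnable sketch of two threads incrementing a shared counter (the counter and the iteration count are made up for illustration):
EXAMPLE (sketch):
#include <pthread.h>
#include <stdio.h>

pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;   /* static initialization */
long counter = 0;                                    /* shared global variable */

void *worker(void *arg) {
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);      /* enter critical section */
        counter++;
        pthread_mutex_unlock(&lock);    /* leave critical section */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter); /* always 200000 thanks to the mutex */
    return 0;
}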
pthread_mutex_trylock(&lock)
Attempts to acquire the lock; if the lock is not free, it returns immediately (with EBUSY)
If free, the calling thread acquires it
On success returns 0
If a thread unlocks a mutex held by another thread, the behavior is undefined; only unlock a mutex after a successful lock call
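A hedged sketch of how pthread_mutex_trylock() might be used to avoid blocking (try_update and the shared variable are made-up names):
EXAMPLE (sketch):
#include <pthread.h>

pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
int shared = 0;

/* Try to update the shared variable without blocking if the lock is busy. */
int try_update(void) {
    if (pthread_mutex_trylock(&lock) == 0) {
        shared++;                        /* critical section */
        pthread_mutex_unlock(&lock);     /* only unlock after a successful lock */
        return 1;                        /* update done */
    }
    return 0;                            /* lock was not free (EBUSY); caller can do other work */
}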
Mutexes can also be created dynamically (e.g. by allocating a pthread_mutex_t with malloc)
pthread_mutex_init() is used to initialize them
EXAMPLE:
pthread_mutex_t *lock = malloc(sizeof(pthread_mutex_t));
pthread_mutex_init(lock, NULL);
NULL uses default settings
pthread_mutex_destroy() destroys the mutex; the memory must be freed after destroying it (see the lifecycle sketch below)
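The full lifecycle of a dynamically allocated mutex, as a sketch:
EXAMPLE (sketch):
#include <pthread.h>
#include <stdlib.h>

void lifecycle(void) {
    pthread_mutex_t *lock = malloc(sizeof(pthread_mutex_t));
    pthread_mutex_init(lock, NULL);     /* NULL: default attributes */
    pthread_mutex_lock(lock);
    /* critical section */
    pthread_mutex_unlock(lock);
    pthread_mutex_destroy(lock);        /* destroy first ... */
    free(lock);                         /* ... then free the memory */
}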
Monitor
ADT + Synchronization
Hides the variable from direct access by the programmer
Access is via functions only
Locking responsibility is shifted from the user of the variable to its maintainer
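A sketch of the monitor idea in C: the variable and its mutex are hidden (static) and accessed only through functions (the counter_* names are made up for illustration):
EXAMPLE (sketch):
#include <pthread.h>

static pthread_mutex_t counter_lock = PTHREAD_MUTEX_INITIALIZER;
static long counter = 0;                 /* not visible outside this file */

void counter_increment(void) {
    pthread_mutex_lock(&counter_lock);   /* locking is the maintainer's job */
    counter++;
    pthread_mutex_unlock(&counter_lock);
}

long counter_get(void) {
    pthread_mutex_lock(&counter_lock);
    long v = counter;
    pthread_mutex_unlock(&counter_lock);
    return v;
}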
Serialization
All other threads wait until the critical section becomes free
Threads execute the critical section one after another; no parallelism
Work on unshared data can be placed outside the critical section to reduce serialization and gain more parallelism
Multiple mutexes
Threads may have to acquire more than one mutex
Eg. Splitting counter value between node S and its predecessor P
Both nodes have to be locked, otherwise there is a race
Splitting in pseudocode // int ctr is the counter field
1- lock(pPtr->lock)
2- lock(sPtr->lock)
3- pPtr->ctr += sPtr->ctr/2
4- sPtr->ctr -= sPtr->ctr/2
5- unlock(pPtr->lock)
6- unlock(sPtr->lock)
Deadlock
Each thread is waiting for resources, but all resources are already held, so the system locks up and no progress is possible
Self-deadlock: thread attempting to acquire mutex it already has
EXAMPLE:
pthread_mutex_lock(&lock)
pthread_mutex_lock(&lock) //executing twice
The code above waits forever for the lock to become available again
ABBA-Deadlock
Two threads attempt to acquire mutexes A and B; t1 acquires them in the order A->B, t2 in the order B->A
Multiple mutexes must always be obtained in the same order
Or else the threads may deadlock
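A sketch of how the ABBA deadlock can arise (the mutexes a and b and the thread functions are made up for illustration):
EXAMPLE (sketch):
#include <pthread.h>
#include <stddef.h>

pthread_mutex_t a = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t b = PTHREAD_MUTEX_INITIALIZER;

void *t1(void *arg) {            /* thread 1 locks A -> B */
    pthread_mutex_lock(&a);
    pthread_mutex_lock(&b);      /* blocks forever if t2 already holds b */
    pthread_mutex_unlock(&b);
    pthread_mutex_unlock(&a);
    return NULL;
}

void *t2(void *arg) {            /* thread 2 locks B -> A: opposite order! */
    pthread_mutex_lock(&b);
    pthread_mutex_lock(&a);      /* blocks forever if t1 already holds a */
    pthread_mutex_unlock(&a);
    pthread_mutex_unlock(&b);
    return NULL;
}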
Conditions for deadlock
Mutual exclusion
Resource assigned to at most one thread
Eg. One chopstick is a resource, assigned to at most one person at a time
Hold and wait
Threads both hold resources and request other resources
Eg. Person holds left chopstick, wants right chopstick
No preemption
A resource can only be released by the thread that holds it
Eg. No way of forcibly taking chopstick away from person
Circular wait
Cycle exists in which each thread waits for a resource that is assigned to another thread
Preventing deadlock
Prevent cycles via locking hierarchy
1. Impose ordering on mutexes
a) Only one thread acquires a given mutex at a time; the others are blocked
b) A blocked thread only continues once the other mutexes are released
c) Can use waiting between threads or splitting into subsets
2. Require all threads acquire mutexes in same order
This requires knowing in advance which mutexes a thread will need
The order of unlocking is not important
If a thread holding a mutex enters an infinite loop, the other threads will still stop and wait forever, even with a locking hierarchy
E.g. for the dining philosophers, depending on the algorithm, each one picks up the lower-indexed chopstick first and then the higher-indexed one, so no complete cycle can form (see the ordering sketch below)
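A sketch of a locking hierarchy applied to the counter-splitting example above: both locks are always acquired in a fixed (address) order, so no cycle can form (struct node and lock_pair are assumed names):
EXAMPLE (sketch):
#include <pthread.h>

struct node {
    pthread_mutex_t lock;
    int ctr;
};

/* Always acquire the two locks in the same (address) order. */
static void lock_pair(struct node *x, struct node *y) {
    if (x < y) {
        pthread_mutex_lock(&x->lock);
        pthread_mutex_lock(&y->lock);
    } else {
        pthread_mutex_lock(&y->lock);
        pthread_mutex_lock(&x->lock);
    }
}

void split_counter(struct node *pPtr, struct node *sPtr) {
    lock_pair(pPtr, sPtr);               /* same order in every thread */
    pPtr->ctr += sPtr->ctr / 2;
    sPtr->ctr -= sPtr->ctr / 2;
    pthread_mutex_unlock(&pPtr->lock);   /* unlock order does not matter */
    pthread_mutex_unlock(&sPtr->lock);
}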
Synchronization via locks - lock based synchronization
Lock contention: the lock is currently in use while another thread attempts to acquire it
If a lock is highly contended, many threads wait to acquire it
Locks serialize activities; a highly contended lock limits parallelism
Scalability: a measure of how well the system can be expanded
Highly contended lock limits scalability
Worst to best
Coarse-grained lock → medium-grained lock → fine-grained lock
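A sketch contrasting coarse-grained and fine-grained locking on an array of counters (the size and the names are made up for illustration):
EXAMPLE (sketch):
#include <pthread.h>

#define N 16

long counters[N];

/* Coarse-grained: one lock for the whole array; every update serializes. */
pthread_mutex_t big_lock = PTHREAD_MUTEX_INITIALIZER;

void inc_coarse(int i) {
    pthread_mutex_lock(&big_lock);
    counters[i]++;
    pthread_mutex_unlock(&big_lock);
}

/* Fine-grained: one lock per element; updates to different elements run in parallel. */
pthread_mutex_t elem_locks[N];           /* initialize each with pthread_mutex_init() */

void inc_fine(int i) {
    pthread_mutex_lock(&elem_locks[i]);
    counters[i]++;
    pthread_mutex_unlock(&elem_locks[i]);
}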
Barrier
A synchronization point at which a certain number of threads must arrive before any participating thread can continue
Participating threads call pthread_barrier_wait()
Threads block until the specified number of threads have called pthread_barrier_wait()
int pthread_barrier_init(pthread_barrier_t *barrier, const pthread_barrierattr_t *attr, unsigned count)
NULL attr for default attributes
count specifies the number of threads needed to open the barrier
int pthread_barrier_wait(pthread_barrier_t *barrier)
int pthread_barrier_destroy(pthread_barrier_t *barrier)
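A minimal runnable sketch using the three barrier calls (the thread count is arbitrary; pthread barriers are an optional POSIX feature and may not exist on every platform):
EXAMPLE (sketch):
#include <pthread.h>
#include <stdio.h>

#define NUM_THREADS 4

pthread_barrier_t bar;

void *phase_worker(void *arg) {
    long id = (long) arg;
    printf("thread %ld: phase 1 done\n", id);
    pthread_barrier_wait(&bar);          /* block until all NUM_THREADS arrive */
    printf("thread %ld: phase 2\n", id); /* runs only after every thread finished phase 1 */
    return NULL;
}

int main(void) {
    pthread_t threads[NUM_THREADS];
    pthread_barrier_init(&bar, NULL, NUM_THREADS);   /* NULL: default attributes */
    for (long i = 0; i < NUM_THREADS; i++)
        pthread_create(&threads[i], NULL, phase_worker, (void *) i);
    for (int i = 0; i < NUM_THREADS; i++)
        pthread_join(threads[i], NULL);
    pthread_barrier_destroy(&bar);
    return 0;
}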
Semaphores
sem_t s;
Non-negative integer synchronization variable
--sem_wait(&s);
--sem_post(&s);
--sem_init(&s,0,1);
Basic operations on semaphore s
P(s): [while(s==0) wait();
s--;] // as long as s is still 0, wait for it to become nonzero; then decrement by 1
--A thread has to wait until value of s is greater than 0, then s is decremented by 1 and thread is allowed to continue execution
V(s): [s++;] // increment s by 1
--increment s
It is guaranteed that the statements between brackets are executed indivisibly
The statements between brackets form an atomic operation
-at any time, only one P or V operation can modify s
Semaphore invariant: (s >= 0)
Can be used to synchronize threads
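With POSIX semaphores (<semaphore.h>), P corresponds to sem_wait() and V to sem_post(); a sketch of a binary semaphore (initial value 1) guarding a critical section:
EXAMPLE (sketch):
#include <semaphore.h>
#include <pthread.h>

sem_t s;
long counter = 0;

void *worker(void *arg) {
    sem_wait(&s);            /* P(s): wait until s > 0, then decrement */
    counter++;               /* critical section */
    sem_post(&s);            /* V(s): increment s */
    return NULL;
}

void setup(void) {
    sem_init(&s, 0, 1);      /* 0: shared between threads of this process; initial value 1 */
}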
Semaphore vs Mutexes
- Mutex provides mutual exclusion similar to binary semaphore (semaphore init = 1)
-one thread at a time executes the critical section
-Mutexes are less flexible than semaphores
-a locked mutex has an owner; semaphores don't
-a mutex is initially unlocked; a semaphore's initial value can be anything, including 0 (meaning locked)
-a mutex can assume 2 states, unlocked and locked, so it can be represented by 1 bit
-a semaphore is a counter
Consumer & producer
The consumer waits for the producer to put data into the queue, and vice versa
Only consume when the queue is not empty
Only produce when the queue is not full (see the sketch below)
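A hedged sketch of a bounded-buffer producer/consumer using two counting semaphores plus a mutex (the buffer size and the function names are arbitrary choices):
EXAMPLE (sketch):
#include <pthread.h>
#include <semaphore.h>

#define BUF_SIZE 8

int buf[BUF_SIZE];
int in = 0, out = 0;

sem_t slots;                      /* free slots: producer waits when the queue is full    */
sem_t items;                      /* filled slots: consumer waits when the queue is empty */
pthread_mutex_t buf_lock = PTHREAD_MUTEX_INITIALIZER;

void pc_init(void) {
    sem_init(&slots, 0, BUF_SIZE);   /* all slots free initially */
    sem_init(&items, 0, 0);          /* no items yet */
}

void produce(int item) {
    sem_wait(&slots);                /* only produce when the queue is not full */
    pthread_mutex_lock(&buf_lock);
    buf[in] = item;
    in = (in + 1) % BUF_SIZE;
    pthread_mutex_unlock(&buf_lock);
    sem_post(&items);                /* wake a waiting consumer */
}

int consume(void) {
    sem_wait(&items);                /* only consume when the queue is not empty */
    pthread_mutex_lock(&buf_lock);
    int item = buf[out];
    out = (out + 1) % BUF_SIZE;
    pthread_mutex_unlock(&buf_lock);
    sem_post(&slots);                /* wake a waiting producer */
    return item;
}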