As we wrote the Eighth Edition of Operating System Concepts, we were guided by the goal of providing more practice exercises for students, with solutions included in WileyPLUS.
For what purpose would such a scheme be useful? The three main purposes of an operating system are: (1) to provide an environment in which a computer user can execute programs conveniently and efficiently; (2) to allocate the separate resources of the computer as needed among the programs competing for them; and (3) to serve as a control program, supervising the execution of user programs and managing the operation and control of I/O devices.
Control over when interrupts could be enabled or disabled is also possible only when the CPU is in kernel mode. Consequently, the CPU has very limited capability when executing in user mode, thereby enforcing protection of critical resources.
Consider these operations: set value of timer; read the clock; clear memory; issue a trap instruction; turn off interrupts; modify entries in the device-status table; switch from user to kernel mode. The following operations need to be privileged: set value of timer, clear memory, turn off interrupts, modify entries in the device-status table, and switch from user to kernel mode. The rest (read the clock, issue a trap instruction) can be performed in user mode. Without such protection, the data required by the operating system (passwords, access controls, accounting information, and so on) would have to be stored in or passed through unprotected memory and thus be accessible to unauthorized users.
What are two possible uses of these multiple modes? Although most systems only distinguish between user and kernel modes, some CPUs have supported multiple modes. For example, rather than distinguishing between just user and kernel mode, you could distinguish between different types of user mode. One mode might be associated with a group of users: when the machine was in this mode, a member of the group could run code belonging to anyone else in the group.
Another possibility would be to provide different distinctions within kernel code. Provide a short description of how this could be accomplished. A program could use the following approach to compute the current time using timer interrupts.
The program could set a timer for some time in the future and go to sleep. When it is awakened by the interrupt, it could update its local state, which it is using to keep track of the number of interrupts it has received thus far. It could then repeat this process of continually setting timer interrupts and updating its local state when the interrupts are actually raised.
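The tick-counting approach above can be sketched as follows. This is a simulation, not real timer programming: the 10 ms quantum is an assumption, and the interrupt handler is invoked directly rather than by hardware.

```python
class TickClock:
    """Keep track of the time of day by counting timer interrupts,
    as described above. Illustrative sketch only."""

    def __init__(self, start=0.0, quantum=0.01):
        self.start = start        # time of day when counting began
        self.quantum = quantum    # seconds between timer interrupts (assumed 10 ms)
        self.ticks = 0            # local state: interrupts received so far

    def on_interrupt(self):
        """Invoked each time the timer interrupt is raised; in a real
        system the program would also re-arm the timer here."""
        self.ticks += 1

    def now(self):
        # Current time estimate = start time + interrupts seen * quantum.
        return self.start + self.ticks * self.quantum
```

The accuracy is limited by the quantum: the clock can be off by up to one full timer interval.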
What problems do they solve? What problems do they cause? If a cache can be made as large as the device for which it is caching (for instance, a cache as large as a disk), why not make it that large and eliminate the device?
Caches are useful when two or more components need to exchange data, and the components perform transfers at differing speeds. Caches solve the transfer problem by providing a buffer of intermediate speed between the components. The data in the cache must be kept consistent with the data in the components. If a component has a data value change, and the datum is also in the cache, the cache must also be updated.
This is especially a problem on multiprocessor systems, where more than one process may be accessing a datum. A component may be eliminated by an equal-sized cache, but only if (a) the cache and the component have equivalent state-saving capacity (that is, if the component retains its data when electricity is removed, the cache must retain data as well), and (b) the cache is affordable, because faster storage tends to be more expensive. Under the client-server model, the client requests services that are provided by the server.
In fact, all nodes in the system are considered peers and thus may act as either clients or servers—or both. A node may request a service from another peer, or the node may in fact provide such a service to other peers in the system. Under the client-server model, all recipes are stored with the server. The node or perhaps nodes with the requested recipe could provide it to the requesting node.
Notice how each peer may act as both a client (it may request recipes) and a server (it may provide recipes). System calls allow user-level processes to request services of the operating system. The major activities with regard to process management are: a. The creation and deletion of both user and system processes; b. The suspension and resumption of processes; c. The provision of mechanisms for process synchronization; d. The provision of mechanisms for process communication; e. The provision of mechanisms for deadlock handling.
The three major activities are: (1) keeping track of which parts of memory are currently being used, and by whom; (2) deciding which processes are to be loaded into memory when memory space becomes available; and (3) allocating and deallocating memory space as needed. Why is the command interpreter usually separate from the kernel? It is usually not part of the kernel because the command interpreter is subject to changes. In UNIX systems, a fork system call followed by an exec system call must be performed to start a new process.
The fork call clones the currently executing process, while the exec call overlays a new process based on a different executable over the calling process.
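The fork-then-exec sequence can be sketched in a few lines using Python's POSIX bindings (Unix-only; `os.fork` is unavailable on Windows, and the helper name `spawn` is ours):

```python
import os

def spawn(path, argv):
    """fork() clones the caller; exec() then overlays the child with a
    new program image, exactly the two-step sequence described above."""
    pid = os.fork()
    if pid == 0:
        # Child: replace this process image with the new executable.
        try:
            os.execvp(path, argv)
        finally:
            os._exit(127)          # only reached if exec itself failed
    # Parent: continue independently; here we simply wait for the child.
    _, status = os.waitpid(pid, 0)
    return os.WEXITSTATUS(status)
```

For example, `spawn("true", ["true"])` runs the `true` utility in a child process and returns its exit status.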
System programs can be thought of as bundles of useful system calls. They provide basic functionality to users so that users do not need to write their own programs to solve common problems.
What are the disadvantages of using the layered approach? As in all cases of modular design, designing an operating system in a modular way has several advantages. The system is easier to debug and modify because changes affect only limited sections of the system rather than touching all sections of the operating system.
In which cases would it be impossible for user-level programs to provide these services? Explain your answer. Program execution. A user-level program could not be trusted to properly allocate CPU time. Disks, tapes, serial lines, and other devices must be communicated with at a very low level.
User-level programs cannot be trusted to access only those devices they should have access to, and to access them only when they are otherwise unused. File-system manipulation. There are many details in file creation, deletion, allocation, and naming that users should not have to perform, and blocks of disk space are used by files and must be tracked. Communication. Message passing between systems requires messages to be turned into packets of information, sent to the network controller, transmitted across a communications medium, and reassembled by the destination system.
Packet ordering and data correction must take place. Again, user programs might not coordinate access to the network device, or they might receive packets destined for other processes. Error detection. Error detection occurs at both the hardware and software levels. At the hardware level, all data transfers must be inspected to ensure that data have not been corrupted in transit. All data on media must be checked to be sure they have not changed since they were written to the media.
At the software level, media must be checked for data consistency; for instance, whether the number of allocated and unallocated blocks of storage matches the total number on the device. Errors are frequently process-independent (for instance, the corruption of data on a disk), so there must be a global program (the operating system) that handles all types of errors.
Also, by having errors processed by the operating system, processes need not contain code to catch and correct all the errors possible on a system.
What would the bootstrap program need to do? Consider a system that would like to run both Windows XP and three different distributions of Linux. Each operating system will be stored on disk. During system boot-up, a special program (which we will call the boot manager) will determine which operating system to boot into. Boot managers often provide the user with a selection of systems to boot into, and are typically designed to boot into a default operating system if no choice is made by the user.
The result is still 5, as the child updates only its own copy of value. When control returns to the parent, its value remains at 5. There are 8 processes created: each of the three fork calls doubles the number of processes, giving 2^3 = 8 including the original. Discuss three major complications that concurrent processing adds to an operating system. Describe what happens when a context switch occurs if the new context is already loaded into one of the register sets.
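Returning to the fork result: the copy-on-fork behavior can be demonstrated directly (a POSIX-only sketch; `os.fork` is unavailable on Windows, and the variable name is illustrative):

```python
import os

value = 5

def demo():
    """The child gets its own copy of `value`; the parent's copy is
    untouched, so the parent still sees 5."""
    global value
    pid = os.fork()
    if pid == 0:
        value += 15        # modifies only the child's copy of the variable
        os._exit(0)
    os.waitpid(pid, 0)     # when control returns to the parent...
    return value           # ...its value remains 5
```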
What happens if the new context is in memory rather than in a register set and all the register sets are in use? The CPU current-register-set pointer is changed to point to the set containing the new context, which takes very little time.
If the context is in memory, one of the contexts already in a register set must be chosen and moved to memory, and the new context must be loaded from memory into the set. This process takes a little more time than on systems with one set of registers, depending on how a replacement victim is selected. Which components of program state are shared between the parent process and a newly forked child process: a. Stack b. Heap c. Shared memory segments? Answer: Only the shared memory segments are shared between the parent process and the newly forked child process. Copies of the stack and the heap are made for the newly created process.
Does the algorithm for implementing this semantic execute correctly even if the ACK message back to the client is lost due to a network problem? The general algorithm for ensuring this combines an acknowledgment (ACK) scheme with timestamps or some other incremental counter that allows the server to distinguish between duplicate messages.
The general strategy is for the client to send the RPC to the server along with a timestamp. The client will also start a timeout clock. The client will then wait for one of two occurrences: (1) it receives an ACK for the RPC, or (2) its timeout expires. If the client times out, it assumes the server was unable to perform the remote procedure, so the client invokes the RPC a second time, sending a later timestamp. The client may not receive the ACK for one of two reasons: (1) the original RPC was never received by the server, or (2) the RPC was received and performed, but the ACK was lost. In situation (2), the server will receive a duplicate RPC, and it will use the timestamp to identify it as a duplicate so as not to perform the RPC a second time.
It is important to note that the server must send a second ACK back to the client to inform the client that the RPC has been performed. The server should keep track, in stable storage such as a disk log, of information regarding which RPC operations were received, whether they were successfully performed, and the results associated with the operations.
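The server-side duplicate detection can be sketched as follows. This is an in-memory illustration only: a real server would persist `seen` to stable storage as described above, and the class and parameter names are ours.

```python
class AtMostOnceServer:
    """Use a per-request timestamp/ID to detect duplicate RPCs, so a
    retried request is acknowledged but never re-executed."""

    def __init__(self):
        self.seen = {}                 # request id -> cached result

    def handle(self, req_id, proc, *args):
        if req_id in self.seen:
            # Duplicate (the earlier ACK was lost in transit): re-ACK
            # with the cached result instead of performing the RPC again.
            return self.seen[req_id]
        result = proc(*args)
        self.seen[req_id] = result     # record before acknowledging
        return result
```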
A Web server that services each request in a separate thread. A parallelized application such as matrix multiplication where different parts of the matrix may be worked on in parallel. An interactive GUI program such as a debugger where a thread is used to monitor user input, another thread represents the running application, and a third thread monitors performance.
Under what circumstances is one type better than the other? User-level threads are unknown by the kernel, whereas the kernel is aware of kernel threads.
On systems using either many-to-one or many-to-many mapping, user threads are scheduled by the thread library and the kernel schedules kernel threads. Kernel threads need not be associated with a process, whereas every user thread belongs to a process. Kernel threads are generally more expensive to maintain than user threads, as they must be represented with a kernel data structure. Context switching between kernel threads typically requires saving the value of the CPU registers from the thread being switched out and restoring the CPU registers of the new thread being scheduled.
How do they differ from those used when a process is created? Because a thread is smaller than a process, thread creation typically uses fewer resources than process creation. Creating a process requires allocating a process control block (PCB), a rather large data structure. Allocating and managing the memory map is typically the most time-consuming activity.
Creating either a user or kernel thread involves allocating a small data structure to hold a register set, stack, and priority. Furthermore, the system allows developers to create real-time threads for use in real-time systems. Is it necessary to bind a real-time thread to an LWP? Timing is crucial to real-time applications.
If a thread is marked as real-time but is not bound to an LWP, the thread may have to wait to be attached to an LWP before running. Consider what happens if a real-time thread is running (i.e., is attached to an LWP) and then proceeds to block (e.g., to perform I/O).
While the real-time thread is blocked, the LWP it was attached to has been assigned to another thread. By binding an LWP to a real-time thread you are ensuring the thread will be able to run with minimal delay once it is scheduled.
Explain why this can occur and how such effects can be minimized. The system clock is updated at every clock interrupt. If interrupts were disabled—particularly for a long period of time—it is possible the system clock could easily lose the correct time. The system clock is also used for scheduling purposes.
For example, the time quantum for a process is expressed as a number of clock ticks. At every clock interrupt, the scheduler determines whether the time quantum for the currently running process has expired. If clock interrupts were disabled, the scheduler could not accurately assign time quanta. This effect can be minimized by disabling clock interrupts for only very short periods. Describe the circumstances under which they use spinlocks, mutex locks, semaphores, adaptive mutex locks, and condition variables.
In each case, explain why the mechanism is needed.
Spinlocks are useful for multiprocessor systems where a thread can run in a busy-loop for a short period of time rather than incurring the overhead of being put in a sleep queue.
Mutexes are useful for locking resources. Solaris 2 uses adaptive mutexes, meaning that on multiprocessor machines the mutex starts as a spinlock: if the thread holding the lock is running on another CPU, the waiting thread spins, but if the lock holder is not currently running, the waiter blocks and sleeps. What other kinds of waiting are there in an operating system? Can busy waiting be avoided altogether? Alternatively, a process could wait by relinquishing the processor, blocking on a condition and waiting to be awakened at some appropriate time in the future.
Busy waiting can be avoided but incurs the overhead associated with putting a process to sleep and having to wake it up when the appropriate program state is reached. Spinlocks are not appropriate for single-processor systems because the condition that would break a process out of the spinlock can be obtained only by executing a different process.
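A busy-waiting lock can be sketched as follows. The sketch is illustrative: Python's `threading.Lock` stands in for an atomic test-and-set flag, and on a real single-processor system this spinning would waste the very cycles the lock holder needs.

```python
import threading

class SpinLock:
    """Busy-wait (spin) until the lock is free, rather than sleeping.
    Appropriate only for short critical sections on multiprocessors."""

    def __init__(self):
        self._flag = threading.Lock()   # stands in for an atomic flag

    def acquire(self):
        # Test-and-set in a loop: burn cycles instead of blocking.
        while not self._flag.acquire(blocking=False):
            pass

    def release(self):
        self._flag.release()
```

Two threads incrementing a shared counter under this lock will never interleave inside the critical section.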
A wait operation atomically decrements the value associated with a semaphore. If two wait operations are executed on a semaphore when its value is 1, and the two operations are not performed atomically, then it is possible that both operations might proceed to decrement the semaphore value, thereby violating mutual exclusion.
The n processes share a semaphore, mutex, initialized to 1. Each process Pi is organized as follows: wait(mutex); critical section; signal(mutex); remainder section. Given n processes to be scheduled on one processor, how many different schedules are possible? Give a formula in terms of n. There are n! (n factorial) possible schedules. Preemptive scheduling allows a process to be interrupted in the midst of its execution, taking the CPU away and allocating it to another process. Each process will run for the amount of time listed. In answering the questions, use nonpreemptive scheduling, and base all decisions on the information you have at the time the decision must be made.
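Returning to the mutual-exclusion exercise: each process Pi sharing mutex can be sketched with Python's `threading.Semaphore` standing in for the shared semaphore (the function names are illustrative):

```python
import threading

mutex = threading.Semaphore(1)     # shared by all n processes, initialized to 1

def process(i, critical_section, remainder_section):
    """One Pi: wait(mutex); critical section; signal(mutex); remainder
    section, repeated until the remainder section signals completion."""
    while True:
        mutex.acquire()            # wait(mutex)
        critical_section(i)        # at most one Pi executes here at a time
        mutex.release()            # signal(mutex)
        if not remainder_section(i):
            break                  # remainder section chose to stop
```

Because mutex is initialized to 1 and every Pi brackets its critical section with acquire/release, at most one process is ever inside the critical section.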
What is the average turnaround time for these processes with the FCFS scheduling algorithm? What is the average turnaround time with the SJF scheduling algorithm? One variant is to idle the CPU deliberately until shorter jobs arrive; remember, though, that processes P1 and P2 are waiting during this idle time, so their waiting time may increase. This algorithm could be known as future-knowledge scheduling. (A common error: the FCFS average comes out as 11 if you forget to subtract the arrival times.) Processes that need more frequent servicing, for instance interactive processes such as editors, can be placed in a queue with a small time quantum.
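The turnaround computations can be checked mechanically. The workload below (bursts 8, 4, and 1 arriving at times 0.0, 0.4, and 1.0) is an assumption consistent with the "11 if you forget to subtract arrival time" figure quoted above; the function is a sketch of nonpreemptive FCFS and SJF, not the book's code.

```python
def avg_turnaround(procs, policy="fcfs"):
    """Average turnaround time (completion minus arrival) under
    nonpreemptive FCFS or SJF. procs: list of (arrival, burst)."""
    pending = list(procs)
    t = 0.0        # current time
    total = 0.0    # running sum of turnaround times
    while pending:
        ready = [p for p in pending if p[0] <= t]
        if not ready:
            t = min(p[0] for p in pending)   # CPU idles until next arrival
            continue
        # FCFS picks the earliest arrival; SJF the shortest burst.
        key = (lambda p: p[0]) if policy == "fcfs" else (lambda p: p[1])
        job = min(ready, key=key)
        pending.remove(job)
        t += job[1]
        total += t - job[0]
    return total / len(procs)
```

With this workload, FCFS averages about 10.53 and nonpreemptive SJF about 9.53; summing completion times without subtracting arrivals gives the erroneous 11.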
For example, the RR algorithm requires a parameter to indicate the time slice. These algorithms are thus really sets of algorithms (for example, the set of RR algorithms for all time slices, and so on). What relation, if any, holds between the following pairs of algorithm sets? a. Priority and SJF b. Multilevel feedback queues and FCFS c. Priority and FCFS d. RR and SJF. For (a): the shortest job has the highest priority.
For (c): FCFS gives the highest priority to the job that has been in existence the longest. PCS scheduling is done local to the process: it is how the thread library schedules threads onto available LWPs. SCS scheduling is how the operating system schedules kernel threads. On systems using either many-to-one or many-to-many mapping, the two scheduling models are fundamentally different.
Furthermore, the system allows program developers to create real-time threads. Yes, otherwise a user thread may have to compete for an available LWP prior to being actually scheduled.
By binding the user thread to an LWP, there is no latency while waiting for an available LWP; the real-time user thread can be scheduled immediately.
The scheduler recalculates process priorities once per second using the following function: priority = (recent CPU usage / 2) + base, where base = 60 and recent CPU usage indicates how often a process has used the CPU since priorities were last recalculated. Assume the recent CPU usage of the three processes is 40, 18, and 10. What will be the new priorities for these three processes when priorities are recalculated? The priorities assigned to the processes are 80, 69, and 65, respectively. The scheduler lowers the relative priority of CPU-bound processes.
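The recalculation is mechanical. The formula below, priority = recent usage / 2 + base with base = 60 and usage values 40, 18, and 10, is an assumption matching the standard form of this exercise; it reproduces the 80, 69, and 65 figures quoted above.

```python
def recalc_priority(recent_cpu_usage, base=60):
    """priority = (recent CPU usage / 2) + base; in this scheme a
    larger number means a lower scheduling priority, so heavy CPU
    users sink relative to interactive processes."""
    return recent_cpu_usage // 2 + base
```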
Show that it is possible for the processes to complete their execution without entering a deadlock state. An unsafe state may not necessarily lead to deadlock, it just means that we cannot guarantee that deadlock will not occur.
Thus, it is possible that a system in an unsafe state may still allow all processes to complete without deadlock occurring. Consider the situation where a system has 12 resources allocated among processes P0, P1, and P2 according to the following policy: P0 has a maximum need of 10 and currently holds 5, P1 has a maximum need of 4 and currently holds 2, and P2 has a maximum need of 9 and currently holds 3, leaving 2 resources available. This system is in an unsafe state, as process P1 could complete, thereby freeing a total of four resources. But we cannot guarantee that processes P0 and P2 can complete. However, it is possible that a process may release resources before requesting any further ones.
What is the content of the matrix Need? Is the system in a safe state? If a request from process P1 arrives for (0, 4, 2, 0), can the request be granted immediately?
The values of Need for processes P0 through P4 are, respectively, (0, 0, 0, 0), (0, 7, 5, 0), (1, 0, 0, 2), (0, 0, 2, 0), and (0, 6, 4, 2). Yes, the system is in a safe state: with Available equal to (1, 5, 2, 0), either process P0 or P3 could run. Once process P3 runs, it releases its resources, which allows all other existing processes to run.
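The safe-state reasoning can be checked mechanically with the banker's safety algorithm. The Allocation matrix below is an assumption consistent with the Need values and Available vector quoted here (the standard form of this exercise); `is_safe` is a sketch, not the book's code.

```python
def is_safe(available, allocation, need):
    """Banker's safety algorithm: try to find an order in which every
    process can obtain its remaining need, finish, and release what
    it holds. The state is safe iff every process can finish."""
    work = list(available)
    finish = [False] * len(allocation)
    progress = True
    while progress:
        progress = False
        for i in range(len(allocation)):
            if not finish[i] and all(n <= w for n, w in zip(need[i], work)):
                # Pi can finish; it then releases everything it holds.
                work = [w + a for w, a in zip(work, allocation[i])]
                finish[i] = True
                progress = True
    return all(finish)

# Assumed exercise data, consistent with the Need values above.
allocation = [[0,0,1,2], [1,0,0,0], [1,3,5,4], [0,6,3,2], [0,0,1,4]]
need       = [[0,0,0,0], [0,7,5,0], [1,0,0,2], [0,0,2,0], [0,6,4,2]]
available  = [1, 5, 2, 0]
```

Granting P1's request (0, 4, 2, 0) means subtracting it from Available and Need[1] and adding it to Allocation[1]; re-running the check shows the resulting state is still safe.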
Yes, the request can be granted immediately: this results in the value of Available being (1, 1, 0, 0), and the resulting state is still safe. Such synchronization objects may include mutexes, semaphores, condition variables, and the like. We can prevent the deadlock by adding a sixth object F: before a process may lock any of the other objects, it must first acquire the lock on F. This solution is known as containment. Compare this scheme with the circular-wait scheme of Section 7.
This is probably not a good solution because it yields too large a scope.
Figure 7. As can be seen, the nested outer loops, both of which iterate n times, provide the n² performance. Within these outer loops are two sequential inner loops, which iterate m times. Deadlocks occur about twice per month, and the operator must terminate and rerun about 10 jobs per deadlock. Since the machine currently has idle time, all 5,000 jobs per month could still be run, although turnaround time would increase by about 20 percent on average.
What are the arguments for installing the deadlock-avoidance algorithm? What are the arguments against installing it? An argument for installing deadlock avoidance in the system is that we could ensure deadlock would never occur. In addition, despite the increase in turnaround time, all 5,000 jobs could still run.
An argument against installing deadlock avoidance software is that deadlocks occur infrequently and they cost little when they do occur. When a process requests a resource, a timer is started. If the elapsed time exceeds T, then the process is considered to be starved. One strategy for dealing with starvation would be to adopt a policy where resources are assigned only to the process that has been waiting the longest.
Another strategy would be less strict than what was just mentioned. In this scenario, a resource might be granted to a process that has waited less than another process, providing that the other process is not starving. Requests for and releases of resources are allowed at any time. If a blocked process has the desired resources, then these resources are taken away from it and are given to the requesting process.
The vector of resources for which the blocked process is waiting is increased to include the resources that were taken away. For example, consider a system with three resource types and the vector Available initialized to (4, 2, 2). If process P0 asks for (2, 2, 1), it gets them. If P1 asks for (1, 0, 1), it gets them. Then, if P0 asks for (0, 0, 1), it is blocked (resource not available). If P2 now asks for (2, 0, 0), it gets the available one (1, 0, 0) and one that was allocated to P0 (since P0 is blocked).
Can deadlock occur? Deadlock cannot occur because preemption exists. A process may never acquire all the resources it needs if they are continuously preempted by a series of requests such as those of process C. The Max vector represents the maximum request a process may make. When calculating the safety algorithm we use the Need matrix, which represents Max minus Allocation.
This follows directly from the hold-and-wait condition. A logical address does not refer to an actual existing address; rather, it refers to an abstract address in an abstract address space. Contrast this with a physical address that refers to an actual physical address in memory. A logical address is generated by the CPU and is translated into a physical address by the memory management unit MMU.
Therefore, physical addresses are generated by the MMU. The CPU knows whether it wants an instruction (instruction fetch) or data (data fetch or store). Therefore, two base-limit register pairs are provided: one for instructions and one for data. The instruction base-limit register pair is automatically read-only, so programs can be shared among different users.
Discuss the advantages and disadvantages of this scheme. The major advantage of this scheme is that it is an effective mechanism for code and data sharing. For example, only one copy of an editor or a compiler needs to be kept in memory, and this code can be shared by all processes needing access to the editor or compiler code.
The only disadvantage is that the code and data must be separated, which is usually adhered to in compiler-generated code. Recall that paging is implemented by breaking up an address into a page number and an offset. Because each bit position represents a power of 2, splitting an address between bits results in a page size that is a power of 2. How many bits are there in the logical address, and how many in the physical address? Explain how this effect could be used to decrease the amount of time needed to copy a large amount of memory from one place to another.
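The bit counts follow mechanically from the page size and the page and frame counts. The figures below (64 pages of 1,024 words mapped onto 32 frames) are hypothetical illustration values, not necessarily the exercise's actual numbers:

```python
import math

def address_bits(num_pages, words_per_page, num_frames):
    """Offset bits come from the page size; the page (or frame) number
    contributes the remaining logical (or physical) address bits."""
    offset = int(math.log2(words_per_page))
    logical = int(math.log2(num_pages)) + offset     # page number + offset
    physical = int(math.log2(num_frames)) + offset   # frame number + offset
    return logical, physical
```

For example, 64 pages of 1,024 words each need 6 + 10 = 16 logical bits, and 32 frames need 5 + 10 = 15 physical bits.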
What effect would updating some byte on the one page have on the other page? By allowing two entries in a page table to point to the same page frame in memory, users can share code and data. If the code is reentrant, much memory space can be saved through the shared use of large programs such as text editors, compilers, and database systems. Since segment tables are a collection of base—limit registers, segments can be shared when entries in the segment table of two different jobs point to the same physical location.
The two segment tables must have identical base pointers, and the shared segment number must be the same in the two processes. Describe a paging scheme that allows pages to be shared without requiring that the page numbers be the same. Both of these problems reduce to a program being able to reference both its own code and its data without knowing the segment or page number associated with the address.
One register had the address of the current program segment, another had a base address for the stack, another had a base address for the global data, and so on.
The disadvantage is that all references have to be indirect, through a register that maps to the current segment or page number. By changing these registers, the same code can execute for different processes without the same page or segment numbers. A key is a 4-bit quantity. Each 2K block of memory has a key (the storage key) associated with it. The CPU also has a key (the protection key) associated with it.
A store operation is allowed only if both keys are equal, or if either is zero. Which of the following memory-management schemes could be used successfully with this hardware? a. Bare machine b. Single-user system c. Multiprogramming with a fixed number of processes d. Multiprogramming with a variable number of processes e. Paging f. Segmentation