
Operating system concepts 10th edition pdf free download

About the author of Operating System Concepts 10th Edition PDF Free Download


Abraham Silberschatz, Peter Baer Galvin, and Greg Gagne have created a text that reflects the state of computer technology at the time of writing, while also showing how technological concepts work in theory and in practice. Operating System Concepts, 10th edition offers a wealth of examples to reinforce key concepts and provides extensive coverage of computing and the Internet. It is written specifically for the student. Key concepts are reinforced through instruction, chapter practice exercises, homework exercises, and suggested readings, and students also gain an understanding of how to apply the content. The book provides example programs written in C and Java for use in programming environments.

Avi Silberschatz was born in Haifa, Israel. He received a Ph.D. in Computer Science from the State University of New York (SUNY) at Stony Brook. He is the Sidney J. Weinberg Professor of Computer Science at Yale University and served as chair of Yale's Computer Science department. Prior to coming to Yale, he was the Vice President of the Information Sciences Research Center at Bell Labs, and before that he held an endowed professorship at the University of Texas at Austin. His research interests include database systems, operating systems, storage systems, and network management. Silberschatz was elected an ACM Fellow and received the Karl V. Karlstrom Outstanding Educator Award. He was elected an IEEE Fellow and received the IEEE Taylor L. Booth Education Award, and he was elected an AAAS Fellow. Silberschatz is a member of the Connecticut Academy of Science and Engineering.

Rasit Eskicioglu, Hans Flack, Robert Fowler, G. Scott Graham, Richard Guy, Max Hailperin, Rebecca Hartman, Wayne Hathaway, Christopher Haynes, Don Heller, Bruce Hillyer, Mark Holliday, Dean Hougen, Michael Huang, Ahmed Kamel, Morty Kewstel, Richard Kieburtz, Carol Kroll, Morty Kwestel, Thomas LeBlanc, John Leggett, Jerrold Leichter, Ted Leung, Gary Lippman, Carolyn Miller, Michael Molloy, Euripides Montagne, Yoichi Muraoka, Jim M. Ng, Banu Ozden, Ed Posnak, Boris Putanec, Charles Qualline, John Quarterman, Mike Reiter, Gustavo Rodriguez-Rivera, Carolyn J. Schauble, Thomas P. Skinner, Yannis Smaragdakis, Jesse St. Laurent, John Stankovic, Adam Stauffer, Steven Stepanek, John Sterling, Hal Stern, Louis Stevens, Pete Thomas, David Umbaugh, Steve Vinoski, Tommy Wagner, Larry L.

Wear, John Werth, and James M. Westall. Richard West provided input into one of the chapters, and Salahuddin Khan updated one of the sections. Some of the slides were prepared by Marilyn Turnamian. Book Production: The Executive Editor was Don Fowley. The Senior Production Editor was Ken Santor. The Freelance Developmental Editor was Chris Nelson. The Assistant Developmental Editor was Ryann Dannelly. The cover designer was Tom Nery. The copyeditor was Beverly Peavler. The freelance proofreader was Katrina Avery. The freelance indexer was WordCo, Inc. The Aptara LaTeX team consisted of Neeraj Saxena and Lav Kush. Personal Notes: Avi would like to acknowledge Valerie for her love, patience, and support during the revision of this book. Peter would like to thank his wife Carla and his children, Gwen, Owen, and Maddie.

Greg would like to acknowledge the continued support of his family: his wife Pat and sons Thomas and Jay.

Abraham Silberschatz, New Haven, CT
Peter Baer Galvin, Boston, MA
Greg Gagne, Salt Lake City, UT

PART ONE. Chapter 1: Introduction. The purpose of an operating system is to provide an environment in which a user can execute programs in a convenient and efficient manner. An operating system is software that manages the computer hardware. The hardware must provide appropriate mechanisms to ensure the correct operation of the computer system and to prevent programs from interfering with the proper operation of the system. Internally, operating systems vary greatly in their makeup, since they are organized along many different lines. The design of a new operating system is a major task, and it is important that the goals of the system be well defined before the design begins.

Because an operating system is large and complex, it must be created piece by piece. Each of these pieces should be a well-delineated portion of the system, with carefully defined inputs, outputs, and functions. The operating system also provides a basis for application programs and acts as an intermediary between the computer user and the computer hardware. An amazing aspect of operating systems is how they vary in accomplishing these tasks in a wide variety of computing environments. In order to explore the role of an operating system in a modern computing environment, it is important first to understand the organization and architecture of computer hardware. A fundamental responsibility of an operating system is to allocate hardware resources to programs.

In this chapter, we provide a general overview of the major components of a contemporary computer system as well as the functions provided by the operating system. Additionally, we cover several topics to help set the stage for the remainder of the text: data structures used in operating systems, computing environments, and open-source and free operating systems. A computer system can be divided roughly into four components: the hardware, the operating system, the application programs, and a user (Figure 1.1). The operating system controls the hardware and coordinates its use among the various application programs for the various users. We can also view a computer system as consisting of hardware, software, and data. The operating system provides the means for proper use of these resources in the operation of the computer system. An operating system is similar to a government.

Like a government, it performs no useful function by itself. It simply provides an environment within which other programs can do useful work. Many computer users sit with a laptop or in front of a PC consisting of a monitor, keyboard, and mouse. Such a system is designed for one user to monopolize its resources. The goal is to maximize the work or play that the user is performing. In this case, the operating system is designed mostly for ease of use, with some attention paid to performance and security and none paid to resource utilization—how various hardware and software resources are shared.

[Figure 1.1: abstract view of the components of a computer system, with users at the top, application programs such as compilers, web browsers, and development kits in the middle, and the operating system and computer hardware beneath them.]

Increasingly, many users interact with mobile devices such as smartphones and tablet computers. These devices are typically connected to networks through cellular or other wireless technologies. The user interface for mobile computers generally features a touch screen, where the user interacts with the system by pressing and swiping fingers across the screen rather than using a physical keyboard and mouse.

Some computers have little or no user view. For example, embedded computers in home devices and automobiles may have numeric keypads and may turn indicator lights on or off to show status, but they and their operating systems and applications are designed primarily to run without user intervention. In this context, we can view an operating system as a resource allocator. A computer system has many resources that may be required to solve a problem: CPU time, memory space, storage space, I/O devices, and so on. The operating system acts as the manager of these resources. Facing numerous and possibly conflicting requests for resources, the operating system must decide how to allocate them to specific programs and users so that it can operate the computer system efficiently and fairly.

An operating system is a control program. A control program manages the execution of user programs to prevent errors and improper use of the computer. By now, you can see that the term operating system covers many roles and functions. That is the case, at least in part, because of the myriad designs and uses of computers. Computers are present within toasters, cars, ships, spacecraft, homes, and businesses. They are the basis for game machines, cable TV tuners, and industrial control systems. To explain this diversity, we can turn to the history of computers. Although computers have a relatively short history, they have evolved rapidly. Computing started as an experiment to determine what could be done and quickly moved to fixed-purpose systems for military uses, such as code breaking and trajectory plotting, and governmental uses, such as census calculation.

Computers gained in functionality and shrank in size, leading to a vast number of uses and a vast number and variety of operating systems. See Appendix A for more details on the history of operating systems. How, then, can we define what an operating system is? In general, we have no completely adequate definition of an operating system. Operating systems exist because they offer a reasonable way to solve the problem of creating a usable computing system. The fundamental goal of computer systems is to execute programs and to make solving user problems easier.

Computer hardware is constructed toward this goal. Since bare hardware alone is not particularly easy to use, application programs are developed. These programs require certain common operations, such as those controlling the I/O devices. The common functions of controlling and allocating resources are then brought together into one piece of software: the operating system. In addition, we have no universally accepted definition of what is part of the operating system. Some systems take up less than a megabyte of space and lack even a full-screen editor, whereas others require gigabytes of space and are based entirely on graphical windowing systems. A more common definition, and the one that we usually follow, is that the operating system is the one program running at all times on the computer—usually called the kernel. Along with the kernel, there are two other types of programs: system programs, which are associated with the operating system but are not necessarily part of the kernel, and application programs, which include all programs not associated with the operation of the system.

The matter of what constitutes an operating system became increasingly important as personal computers became more widespread and operating systems grew increasingly sophisticated. In 1998, the United States Department of Justice filed suit against Microsoft, in essence claiming that Microsoft included too much functionality in its operating systems and thus prevented application vendors from competing. As a result, Microsoft was found guilty of using its operating-system monopoly to limit competition. Today, however, if we look at operating systems for mobile devices, we see that once again the number of features constituting the operating system is increasing. Mobile operating systems often include not only a core kernel but also middleware—a set of software frameworks that provide additional services to application developers.

Although there are many practitioners of computer science, only a small percentage of them will be involved in the creation or modification of an operating system. Why, then, study operating systems and how they work? Simply because, as almost all code runs on top of an operating system, knowledge of how operating systems work is crucial to proper, efficient, effective, and secure programming. Understanding the fundamentals of operating systems, how they drive computer hardware, and what they provide to applications is not only essential to those who program them but also highly useful to those who write programs on them and use them. In summary, for our purposes, the operating system includes the always-running kernel, middleware frameworks that ease application development and provide features, and system programs that aid in managing the system while it is running.

Most of this text is concerned with the kernel of general-purpose operating systems, but other components are discussed as needed to fully explain operating system design and operation. A modern general-purpose computer system consists of one or more CPUs and a number of device controllers connected through a common bus that provides access to shared memory. Each device controller is in charge of a specific type of device (for example, a disk drive, audio device, or graphics display). Depending on the controller, more than one device may be attached. For instance, one system USB port can connect to a USB hub, to which several devices can connect. A device controller maintains some local buffer storage and a set of special-purpose registers. The device controller is responsible for moving the data between the peripheral devices that it controls and its local buffer storage. Typically, operating systems have a device driver for each device controller. This device driver understands the device controller and provides the rest of the operating system with a uniform interface to the device.

The CPU and the device controllers can execute in parallel, competing for memory cycles. To ensure orderly access to the shared memory, a memory controller synchronizes access to the memory. In the following subsections, we describe some basics of how such a system operates, focusing on three key aspects of the system. We start with interrupts, which alert the CPU to events that require attention. We then discuss storage structure and I/O structure.

[Figure 1.2: a typical PC computer system, showing the CPU, memory, graphics adapter, and device controllers (a disk controller for the disks and a USB controller for the mouse, keyboard, and printer) connected by a common system bus, with a monitor attached to the graphics adapter.]

To start an I/O operation, the device driver loads the appropriate registers in the device controller. The device controller, in turn, examines the contents of these registers to determine what action to take. The controller starts the transfer of data from the device to its local buffer. Once the transfer of data is complete, the device controller informs the device driver that it has finished its operation. The device driver then gives control to other parts of the operating system, possibly returning the data or a pointer to the data if the operation was a read. But how does the controller inform the device driver that it has finished its operation? This is accomplished via an interrupt. There may be many buses within a computer system, but the system bus is the main communications path between the major components.

Interrupts are used for many other purposes as well and are a key part of how operating systems and hardware interact. When the CPU is interrupted, it stops what it is doing and immediately transfers execution to a fixed location. The fixed location usually contains the starting address where the service routine for the interrupt is located. The interrupt service routine executes; on completion, the CPU resumes the interrupted computation. Interrupts are an important part of a computer architecture. Each computer design has its own interrupt mechanism, but several functions are common. The interrupt must transfer control to the appropriate interrupt service routine.

The straightforward method for managing this transfer would be to invoke a generic routine to examine the interrupt information. The routine, in turn, would call the interrupt-specific handler. However, interrupts must be handled quickly, as they occur very frequently. A table of pointers to interrupt routines can be used instead to provide the necessary speed. The interrupt routine is called indirectly through the table, with no intermediate routine needed. Generally, the table of pointers is stored in low memory (the first hundred or so locations). These locations hold the addresses of the interrupt service routines for the various devices. This array, or interrupt vector, of addresses is then indexed by a unique number, given with the interrupt request, to provide the address of the interrupt service routine for the interrupting device. Operating systems as different as Windows and UNIX dispatch interrupts in this manner. The interrupt architecture must also save the state information of whatever was interrupted, so that it can restore this information after servicing the interrupt.

If the interrupt routine needs to modify the processor state —for instance, by modifying register values—it must explicitly save the current state and then restore that state before returning. After the interrupt is serviced, the saved return address is loaded into the program counter, and the interrupted computation resumes as though the interrupt had not occurred. The CPU hardware has a wire called the interrupt-request line that the CPU senses after executing every instruction. When the CPU detects that a controller has asserted a signal on the interrupt-request line, it reads the interrupt number and jumps to the interrupt-handler routine by using that interrupt number as an index into the interrupt vector. It then starts execution at the address associated with that index.

The interrupt handler saves any state it will be changing during its operation, determines the cause of the interrupt, performs the necessary processing, performs a state restore, and executes a return from interrupt instruction to return the CPU to the execution state prior to the interrupt. We say that the device controller raises an interrupt by asserting a signal on the interrupt request line, the CPU catches the interrupt and dispatches it to the interrupt handler, and the handler clears the interrupt by servicing the device. The basic interrupt mechanism just described enables the CPU to respond to an asynchronous event, as when a device controller becomes ready for service.
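The dispatch step can be pictured in a few lines of C. The sketch below is purely illustrative (the vector size, the handler signature, and the choice of vector number 14 for a disk controller are assumptions made for the example, not details of any particular architecture); it shows how the interrupt number indexes a table of pointers to service routines.

    #include <stdio.h>

    #define NVECTORS 256                       /* assumed size of the interrupt vector */

    typedef void (*isr_t)(void);               /* an interrupt service routine */

    static isr_t interrupt_vector[NVECTORS];   /* table of pointers to handlers */

    /* Handler for a hypothetical disk controller installed at vector 14. */
    static void disk_isr(void) {
        puts("disk interrupt: transfer complete, notify the device driver");
    }

    /* What the CPU conceptually does when the interrupt-request line is asserted. */
    static void dispatch_interrupt(int vector) {
        if (vector >= 0 && vector < NVECTORS && interrupt_vector[vector] != NULL)
            interrupt_vector[vector]();        /* indirect call through the table */
    }

    int main(void) {
        interrupt_vector[14] = disk_isr;       /* installed at boot time */
        dispatch_interrupt(14);                /* simulate the controller raising vector 14 */
        return 0;
    }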

In a modern operating system, however, we need more sophisticated interrupt-handling features. We need the ability to defer interrupt handling during critical processing. We need an efficient way to dispatch to the proper interrupt handler for a device. We need multilevel interrupts, so that the operating system can distinguish between high- and low-priority interrupts and can respond with the appropriate degree of urgency. In modern computer hardware, these three features are provided by the CPU and the interrupt-controller hardware. Most CPUs have two interrupt request lines. One is the nonmaskable interrupt, which is reserved for events such as unrecoverable memory errors. The second interrupt line is maskable: it can be turned off by the CPU before the execution of critical instruction sequences that must not be interrupted.

The maskable interrupt is used by device controllers to request service. Recall that the purpose of a vectored interrupt mechanism is to reduce the need for a single interrupt handler to search all possible sources of interrupts to determine which one needs service. In practice, however, computers have more devices (and, hence, interrupt handlers) than they have address elements in the interrupt vector. A common way to solve this problem is to use interrupt chaining, in which each element in the interrupt vector points to the head of a list of interrupt handlers. When an interrupt is raised, the handlers on the corresponding list are called one by one, until one is found that can service the request. This structure is a compromise between the overhead of a huge interrupt table and the inefficiency of dispatching to a single interrupt handler.
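Interrupt chaining can be sketched the same way. In the illustrative C below, each vector entry heads a linked list of handlers that are tried in turn until one claims the interrupt; the structure names, the shared vector number, and the idea that a handler reports whether its device raised the interrupt are assumptions made for the example.

    #include <stdbool.h>
    #include <stdio.h>

    /* One handler in a chain; returns true if its device raised the interrupt. */
    struct handler {
        bool (*service)(void);
        struct handler *next;
    };

    #define NVECTORS 256
    static struct handler *chain[NVECTORS];    /* each entry heads a list of handlers */

    static bool keyboard_service(void) { return false; }  /* not my interrupt */
    static bool usb_service(void) { puts("USB device serviced"); return true; }

    /* Walk the chain for one vector until some handler claims the interrupt. */
    static void dispatch_chained(int vector) {
        for (struct handler *h = chain[vector]; h != NULL; h = h->next)
            if (h->service())
                return;                        /* serviced; stop searching */
        puts("spurious interrupt");            /* no handler claimed it */
    }

    int main(void) {
        /* Two devices share vector 43 (an arbitrary number for the example). */
        struct handler usb = { usb_service, NULL };
        struct handler kbd = { keyboard_service, &usb };
        chain[43] = &kbd;
        dispatch_chained(43);
        return 0;
    }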

On Intel processors, for example, the events from 0 to 31, which are nonmaskable, are used to signal various error conditions, while the events from 32 to 255, which are maskable, are used for purposes such as device-generated interrupts. The interrupt mechanism also implements a system of interrupt priority levels. These levels enable the CPU to defer the handling of low-priority interrupts without masking all interrupts and make it possible for a high-priority interrupt to preempt the execution of a low-priority interrupt. In summary, interrupts are used throughout modern operating systems to handle asynchronous events and for other purposes we will discuss throughout the text. Device controllers and hardware faults raise interrupts.

To enable the most urgent work to be done first, modern computers use a system of interrupt priorities. Because interrupts are used so heavily for time-sensitive processing, efficient interrupt handling is required for good system performance. General-purpose computers run most of their programs from rewritable memory, called main memory (also called random-access memory, or RAM). Main memory commonly is implemented in a semiconductor technology called dynamic random-access memory (DRAM). Computers use other forms of memory as well. For example, the first program to run on computer power-on is a bootstrap program, which then loads the operating system.

Since RAM is volatile—it loses its content when power is turned off or otherwise lost—we cannot trust it to hold the bootstrap program. Instead, for this and some other purposes, the computer uses electrically erasable programmable read-only memory (EEPROM) and other forms of firmware—storage that is infrequently written to and is nonvolatile. The basic unit of computer storage is the bit. A bit can contain one of two values, 0 and 1. All other storage in a computer is based on collections of bits. Given enough bits, it is amazing how many things a computer can represent: numbers, letters, images, movies, sounds, documents, and programs, to name a few. A byte is 8 bits, and on most computers it is the smallest convenient chunk of storage. A word is made up of one or more bytes. For example, a computer that has 64-bit registers and 64-bit memory addressing typically has 64-bit (8-byte) words. A computer executes many operations in its native word size rather than a byte at a time.

Computer storage, along with most computer throughput, is generally measured and manipulated in bytes and collections of bytes. A kilobyte, or KB, is 1,024 bytes; a megabyte, or MB, is 1,024^2 bytes; a gigabyte, or GB, is 1,024^3 bytes; a terabyte, or TB, is 1,024^4 bytes; and a petabyte, or PB, is 1,024^5 bytes. Computer manufacturers often round off these numbers and say that a megabyte is 1 million bytes and a gigabyte is 1 billion bytes. Networking measurements are an exception to this general rule; they are given in bits because networks move data a bit at a time.
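The gap between the powers-of-1,024 definitions above and the manufacturers' decimal rounding is easy to check with a few lines of C (a small illustrative calculation, not taken from the book):

    #include <stdio.h>

    int main(void) {
        unsigned long long kb = 1024ULL;       /* 2^10 bytes */
        unsigned long long mb = kb * 1024;     /* 2^20 bytes */
        unsigned long long gb = mb * 1024;     /* 2^30 bytes */
        unsigned long long tb = gb * 1024;     /* 2^40 bytes */

        printf("1 GB = %llu bytes\n", gb);     /* 1,073,741,824 */
        printf("a 'billion-byte gigabyte' = %llu bytes\n", 1000000000ULL);
        printf("difference = %llu bytes\n", gb - 1000000000ULL);  /* about 7 percent of a GB */
        printf("1 TB = %llu bytes\n", tb);
        return 0;
    }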

EEPROM can be changed but cannot be changed frequently. For example, the iPhone uses EEPROM to store serial numbers and hardware information about the device. All forms of memory provide an array of bytes. Each byte has its own address. Interaction is achieved through a sequence of load or store instructions to specific memory addresses. The load instruction moves a byte or word from main memory to an internal register within the CPU, whereas the store instruction moves the content of a register to main memory. Aside from explicit loads and stores, the CPU automatically loads instructions from main memory for execution from the location stored in the program counter.

A typical instruction-execution cycle, as executed on a system with a von Neumann architecture, first fetches an instruction from memory and stores that instruction in the instruction register. The instruction is then decoded and may cause operands to be fetched from memory and stored in some internal register. After the instruction on the operands has been executed, the result may be stored back in memory. Notice that the memory unit sees only a stream of memory addresses. It does not know how they are generated (by the instruction counter, indexing, indirection, literal addresses, or some other means) or what they are for (instructions or data). Accordingly, we can ignore how a memory address is generated by a program. We are interested only in the sequence of memory addresses generated by the running program.
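The cycle just described can be made concrete with a toy simulator in C. Everything about this machine (its two-word instruction format, its tiny memory, its single accumulator, and its three opcodes) is invented for the example; the point is only the fetch, decode, execute structure driven by a program counter.

    #include <stdio.h>

    #define MEM_SIZE 16

    enum { OP_HALT = 0, OP_LOAD = 1, OP_ADD = 2 };   /* toy instruction set */

    int main(void) {
        /* Each instruction is encoded as an opcode followed by an operand address. */
        int memory[MEM_SIZE] = {
            OP_LOAD, 10,        /* acc = memory[10]       */
            OP_ADD,  11,        /* acc = acc + memory[11] */
            OP_HALT, 0,
            0, 0, 0, 0,
            5, 7                /* data at addresses 10 and 11 */
        };
        int pc = 0;             /* program counter */
        int acc = 0;            /* a single accumulator register */

        for (;;) {
            int instruction = memory[pc++];   /* fetch into the "instruction register" */
            int operand     = memory[pc++];
            switch (instruction) {            /* decode */
            case OP_LOAD: acc = memory[operand];  break;   /* execute */
            case OP_ADD:  acc += memory[operand]; break;
            case OP_HALT: printf("acc = %d\n", acc); return 0;   /* prints acc = 12 */
            }
        }
    }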

Ideally, we want the programs and data to reside in main memory permanently. This arrangement usually is not possible on most systems, for two reasons:

1. Main memory is usually too small to store all needed programs and data permanently.

2. Main memory, as mentioned, is volatile—it loses its contents when power is turned off or otherwise lost.

Thus, most computer systems provide secondary storage as an extension of main memory. The main requirement for secondary storage is that it be able to hold large quantities of data permanently. The most common secondary-storage devices are hard-disk drives (HDDs) and nonvolatile memory (NVM) devices, which provide storage for both programs and data. Most programs (system and application) are stored in secondary storage until they are loaded into memory. Many programs then use secondary storage as both the source and the destination of their processing. Secondary storage is also much slower than main memory. Hence, the proper management of secondary storage is of central importance to a computer system, as we discuss in Chapter 11. In a larger sense, however, the storage structure that we have described—consisting of registers, main memory, and secondary storage—is only one of many possible storage system designs.

Other possible components include cache memory, CD-ROM or Blu-ray, magnetic tapes, and so on. Those that are slow enough and large enough that they are used only for special purposes—to store backup copies of material stored on other devices, for example—are called tertiary storage. Each storage system provides the basic functions of storing a datum and holding that datum until it is retrieved at a later time. The main differences among the various storage systems lie in speed, size, and volatility. The wide variety of storage systems can be organized in a hierarchy according to these characteristics, as sketched below.

[Figure: the storage-device hierarchy, organized by storage capacity and access time. From fastest and smallest to slowest and largest: registers, cache, and main memory (volatile primary storage); nonvolatile memory and hard-disk drives (secondary storage); optical disk and magnetic tapes (tertiary storage).]

As a general rule, there is a trade-off between size and speed, with smaller and faster memory closer to the CPU. As shown in the figure, in addition to differing in speed and capacity, the various storage systems are either volatile or nonvolatile. Volatile storage, as mentioned earlier, loses its contents when the power to the device is removed, so data must be written to nonvolatile storage for safekeeping.

The top four levels of memory in the figure are constructed using semiconductor memory, which consists of semiconductor-based electronic circuits. NVM devices, at the fourth level, have several variants but in general are faster than hard disks. The most common form of NVM device is flash memory, which is popular in mobile devices such as smartphones and tablets. Increasingly, flash memory is being used for long-term storage on laptops, desktops, and servers as well. Since storage plays an important role in operating-system structure, we will refer to it frequently in the text. If we need to emphasize a particular type of storage device (for example, a register), we will do so explicitly.

Nonvolatile storage retains its contents when power is lost; it will be referred to as NVS. The vast majority of the time we spend on NVS will be on secondary storage, which comes in two broad forms. The first is mechanical: a few examples of such storage systems are HDDs, optical disks, holographic storage, and magnetic tape. If we need to emphasize a particular type of mechanical storage device (for example, magnetic tape), we will do so explicitly. The second is electrical: a few examples of such storage systems are flash memory, FRAM, NRAM, and SSD. Electrical storage will be referred to as NVM. If we need to emphasize a particular type of electrical storage device (for example, SSD), we will do so explicitly. Mechanical storage is generally larger and less expensive per byte than electrical storage. Conversely, electrical storage is typically costly, smaller, and faster than mechanical storage. The design of a complete storage system must balance all the factors just discussed: it must use only as much expensive memory as necessary while providing as much inexpensive, nonvolatile storage as possible.

Caches can be installed to improve performance where a large disparity in access time or transfer rate exists between two components. Recall from the beginning of this section that a general-purpose computer system consists of multiple devices, all of which exchange data via a common bus. Interrupt-driven I/O, as described above, is fine for moving small amounts of data but can produce high overhead when used for bulk data movement such as NVS I/O. To solve this problem, direct memory access (DMA) is used. After setting up buffers, pointers, and counters for the I/O device, the device controller transfers an entire block of data directly to or from the device and main memory, with no intervention by the CPU. Only one interrupt is generated per block, to tell the device driver that the operation has completed, rather than the one interrupt per byte generated for low-speed devices. While the device controller is performing these operations, the CPU is available to accomplish other work.
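What "setting up buffers, pointers, and counters" might look like from the CPU's side is sketched below. The register layout is entirely hypothetical (real DMA controllers differ widely and are programmed through hardware-defined, memory-mapped registers); the sketch only shows the CPU writing a source, a destination, and a byte count, then starting the transfer and moving on, with completion reported later by a single interrupt.

    #include <stdint.h>

    /* Hypothetical memory-mapped DMA controller registers (not a real device). */
    struct dma_regs {
        volatile uint64_t src;       /* device buffer address            */
        volatile uint64_t dst;       /* main-memory destination address  */
        volatile uint32_t count;     /* number of bytes in the block     */
        volatile uint32_t control;   /* bit 0 = start                    */
    };

    #define DMA_START 0x1u

    /* Program one block transfer; the CPU is free to do other work once this
     * returns. Completion is signaled later by a single interrupt per block. */
    void dma_start_read(struct dma_regs *dma, uint64_t device_buf,
                        void *mem_buf, uint32_t nbytes)
    {
        dma->src     = device_buf;
        dma->dst     = (uint64_t)(uintptr_t)mem_buf;
        dma->count   = nbytes;
        dma->control = DMA_START;    /* the controller now moves the whole block */
    }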

Some high-end systems use a switch rather than a bus architecture. On these systems, multiple components can talk to other components concurrently, rather than competing for cycles on a shared bus. In this case, DMA is even more effective. A computer system can be organized in a number of different ways, which we can categorize roughly according to the number of general-purpose processors used. Many years ago, most computer systems used a single processor containing one CPU with a single processing core. The core is the component that executes instructions and contains registers for storing data locally. The one main CPU with its core is capable of executing a general-purpose instruction set, including instructions from processes.

These systems have other special-purpose processors as well. They may come in the form of device-specific processors, such as disk, keyboard, and graphics controllers. All of these special-purpose processors run a limited instruction set and do not run processes. Sometimes, they are managed by the operating system, in that the operating system sends them information about their next task and monitors their status. For example, a disk-controller microprocessor receives a sequence of requests from the main CPU core and implements its own disk queue and scheduling algorithm. This arrangement relieves the main CPU of the overhead of disk scheduling. PCs contain a microprocessor in the keyboard to convert the keystrokes into codes to be sent to the CPU.

In other systems or circumstances, special-purpose processors are low-level components built into the hardware. The operating system cannot communicate with these processors; they do their jobs autonomously. The use of special-purpose microprocessors is common and does not turn a single-processor system into a multiprocessor. If there is only one general-purpose CPU with a single processing core, then the system is a single-processor system. According to this definition, however, very few contemporary computer systems are single-processor systems; instead, multiprocessor systems now dominate the landscape of computing. Traditionally, such systems have two or more processors, each with a single-core CPU. The processors share the computer bus and sometimes the clock, memory, and peripheral devices. The primary advantage of multiprocessor systems is increased throughput.

That is, by increasing the number of processors, we expect to get more work done in less time. The speed-up ratio with N processors is not N, however; it is less than N. When multiple processors cooperate on a task, a certain amount of overhead is incurred in keeping all the parts working correctly. This overhead, plus contention for shared resources, lowers the expected gain from additional processors. The most common multiprocessor systems use symmetric multiprocessing (SMP), in which each peer CPU processor performs all tasks, including operating-system functions and user processes. Notice that each CPU processor has its own set of registers, as well as a private—or local—cache. However, all processors share physical memory over the system bus. The benefit of this model is that many processes can run simultaneously—N processes can run if there are N CPUs—without causing performance to deteriorate significantly. However, since the CPUs are separate, one may be sitting idle while another is overloaded, resulting in inefficiencies.

These inefficiencies can be avoided if the processors share certain data structures. A multiprocessor system of this form will allow processes and resources—such as memory—to be shared dynamically among the various processors and can lower the workload variance among the processors. Such a system must be written carefully, as we shall see in Chapter 5 and Chapter 6. The definition of multiprocessor has evolved over time and now includes multicore systems, in which multiple computing cores reside on a single chip. Multicore systems can be more efficient than multiple chips with single cores because on-chip communication is faster than between-chip communication. In addition, one chip with multiple cores uses significantly less power than multiple single-core chips, an important issue for mobile devices as well as laptops.

Consider, for example, a dual-core design with two cores on the same processor chip. In this design, each core has its own register set, as well as its own local cache, often known as a level 1, or L1, cache. Notice, too, that a level 2 (L2) cache is local to the chip but is shared by the two processing cores. Most architectures adopt this approach, combining local and shared caches, where local, lower-level caches are generally smaller and faster than higher-level, shared caches.

The following definitions are useful:
Processor — a physical chip that contains one or more CPUs.
Core — the basic computation unit of the CPU.
Multicore — including multiple computing cores on the same CPU.
Multiprocessor — including multiple processors.

Although virtually all systems are now multicore, we use the general term CPU when referring to a single computational unit of a computer system and core as well as multicore when specifically referring to one or more cores on a CPU. Aside from architectural considerations, such as cache, memory, and bus contention, a multicore processor with N cores appears to the operating system as N standard CPUs.

This characteristic puts pressure on operating-system designers—and application programmers—to make efficient use of these processing cores, an issue we pursue in Chapter 4. Virtually all modern operating systems—including Windows, macOS, and Linux, as well as Android and iOS mobile systems—support multicore SMP systems.
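A minimal way to see multiple cores being put to work is a program that spreads independent work across threads, which the operating system can then schedule onto different cores. The POSIX-threads sketch below assumes a Unix-like system (compile with -pthread); the work itself, summing slices of an array, is just a placeholder.

    #include <pthread.h>
    #include <stdio.h>

    #define NTHREADS 4
    #define N 1000000

    static long data[N];

    struct slice { int begin, end; long sum; };

    /* Each thread sums its own slice; slices do not overlap, so no locking is needed. */
    static void *worker(void *arg) {
        struct slice *s = arg;
        for (int i = s->begin; i < s->end; i++)
            s->sum += data[i];
        return NULL;
    }

    int main(void) {
        for (int i = 0; i < N; i++)
            data[i] = 1;

        pthread_t tid[NTHREADS];
        struct slice parts[NTHREADS];
        for (int t = 0; t < NTHREADS; t++) {
            parts[t] = (struct slice){ t * (N / NTHREADS), (t + 1) * (N / NTHREADS), 0 };
            pthread_create(&tid[t], NULL, worker, &parts[t]);
        }

        long total = 0;
        for (int t = 0; t < NTHREADS; t++) {
            pthread_join(tid[t], NULL);
            total += parts[t].sum;
        }
        printf("total = %ld\n", total);    /* prints 1000000 */
        return 0;
    }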

Adding additional CPUs to a multiprocessor system will increase computing power; however, as suggested earlier, the concept does not scale very well, and once we add too many CPUs, contention for the system bus becomes a bottleneck and performance begins to degrade. An alternative approach is instead to provide each CPU or group of CPUs with its own local memory that is accessed via a small, fast local bus. The CPUs are connected by a shared system interconnect, so that all CPUs share one physical address space. This approach is known as non-uniform memory access, or NUMA. The advantage is that, when a CPU accesses its local memory, not only is it fast, but there is also no contention over the system interconnect. Thus, NUMA systems can scale more effectively as more processors are added. A potential drawback with a NUMA system is increased latency when a CPU must access remote memory across the system interconnect, creating a possible performance penalty. In other words, for example, CPU0 cannot access the local memory of CPU3 as quickly as it can access its own local memory, slowing down performance.
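Applications can also cooperate with the operating system by placing memory deliberately. The sketch below uses the Linux libnuma library (assuming it is installed; link with -lnuma) to allocate a buffer on one particular node, so that threads running on that node's CPUs pay local rather than remote access costs.

    #include <numa.h>
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        if (numa_available() < 0) {
            fprintf(stderr, "NUMA policy is not supported on this system\n");
            return 1;
        }
        int node = 0;                          /* place the buffer on node 0 */
        size_t size = 64 * 1024 * 1024;        /* 64 MB */

        /* Ask for physical memory taken from the chosen node's local memory. */
        char *buf = numa_alloc_onnode(size, node);
        if (buf == NULL) {
            fprintf(stderr, "allocation failed\n");
            return 1;
        }
        memset(buf, 0, size);                  /* touch the pages */
        printf("allocated %zu bytes on node %d (system has nodes 0..%d)\n",
               size, node, numa_max_node());
        numa_free(buf, size);
        return 0;
    }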

Operating systems can minimize this NUMA penalty through careful CPU scheduling and memory management, as discussed in Chapter 5. Because NUMA systems can scale to accommodate a large number of processors, they are becoming increasingly popular on servers as well as high-performance computing systems. A final type of multiprocessor system is a blade server, in which multiple processor boards, I/O boards, and networking boards are placed in the same chassis. The difference between these and traditional multiprocessor systems is that each blade-processor board boots independently and runs its own operating system. Some blade-server boards are multiprocessor as well, which blurs the lines between types of computers. In essence, these servers consist of multiple independent multiprocessor systems. Clustered systems differ from the multiprocessor systems described above in that they are composed of two or more individual systems, or nodes, joined together; each node is typically a multicore system. Such systems are considered loosely coupled. We should note that the definition of clustered is not concrete; many commercial and open-source packages wrestle to define what a clustered system is and why one form is better than another.

The generally accepted definition is that clustered computers share storage and are closely linked via a local-area network (LAN), as described in Chapter 19, or a faster interconnect, such as InfiniBand. Clustering is usually used to provide high-availability service—that is, service that will continue even if one or more systems in the cluster fail. Generally, we obtain high availability by adding a level of redundancy in the system. A layer of cluster software runs on the cluster nodes. Each node can monitor one or more of the others over the network. If the monitored machine fails, the monitoring machine can take ownership of its storage and restart the applications that were running on the failed machine. The users and clients of the applications see only a brief interruption of service. High availability provides increased reliability, which is crucial in many applications.

The ability to continue providing service proportional to the level of surviving hardware is called graceful degradation. Some systems go beyond graceful degradation and are called fault tolerant, because they can suffer a failure of any single component and still continue operation. Fault tolerance requires a mechanism to allow the failure to be detected, diagnosed, and, if possible, corrected. Clustering can be structured asymmetrically or symmetrically. In asymmetric clustering, one machine is in hot-standby mode while the other is running the applications. The hot-standby host machine does nothing but monitor the active server.

Even the lowest-cost general-purpose CPU contains multiple cores. Some motherboards contain multiple processor sockets. More advanced computers allow more than one system board, creating NUMA systems.

In symmetric clustering, two or more hosts are running applications and are monitoring each other. This structure is obviously more efficient, as it uses all of the available hardware. However, it does require that more than one application be available to run. Since a cluster consists of several computer systems connected via a network, clusters can also be used to provide high-performance computing environments. Such systems can supply significantly greater computational power than single-processor or even SMP systems because they can run an application concurrently on all computers in the cluster. The application must have been written specifically to take advantage of the cluster, however.

This involves a technique known as parallelization, which divides a program into separate components that run in parallel on individual cores in a computer or computers in a cluster. Typically, these applications are designed so that once each computing node in the cluster has solved its portion of the problem, the results from all the nodes are combined into a final solution. Other forms of clusters include parallel clusters and clustering over a wide-area network (WAN), as described in Chapter 19. Parallel clusters allow multiple hosts to access the same data on shared storage. Because most operating systems lack support for simultaneous data access by multiple hosts, parallel clusters usually require the use of special versions of software and special releases of applications. For example, Oracle Real Application Clusters is a version of Oracle's database that has been designed to run on a parallel cluster. Each machine runs Oracle, and a layer of software tracks access to the shared disk.

Each machine has full access to all data in the database. To provide this shared access, the system must also supply access control and locking to ensure that no conflicting operations occur. This function, commonly known as a distributed lock manager (DLM), is included in some cluster technology. Cluster technology is changing rapidly. Some cluster products support thousands of systems in a cluster, as well as clustered nodes that are separated by miles. Many of these improvements are made possible by storage-area networks (SANs), described later in the text, which allow many systems to attach to a pool of storage. If the applications and their data are stored on the SAN, then the cluster software can assign the application to run on any host that is attached to the SAN. If the host fails, then any other host can take over. In a database cluster, dozens of hosts can share the same database, greatly increasing performance and reliability. An operating system provides the environment within which programs are executed. Internally, operating systems vary greatly, since they are organized along many different lines.

There are, however, many commonalities, which we consider in this section. For a computer to start running—for instance, when it is powered up or rebooted—it needs to have an initial program to run. As noted earlier, this initial program, or bootstrap program, tends to be simple. Typically, it is stored within the computer hardware in firmware. It initializes all aspects of the system, from CPU registers to device controllers to memory contents. The bootstrap program must know how to load the operating system and how to start executing that system.

HADOOP. Hadoop is an open-source software framework that is used for distributed processing of large data sets (known as big data) in a clustered system containing simple, low-cost hardware components.

Hadoop is designed to scale from a single system to a cluster containing thousands of computing nodes. Tasks are assigned to a node in the cluster, and Hadoop arranges communication between nodes to manage parallel computations to process and coalesce results. Hadoop also detects and manages failures in nodes, providing an efficient and highly reliable distributed computing service. Hadoop is organized around the following three components:

1. A distributed file system that manages data and files across distributed computing nodes.

2. The YARN ("Yet Another Resource Negotiator") framework, which manages resources within the cluster and schedules tasks on nodes in the cluster.

3. The MapReduce system, which allows parallel processing of data across nodes in the cluster.

Hadoop is designed to run on Linux systems, and Hadoop applications can be written using several programming languages, including scripting languages such as PHP, Perl, and Python. Java is a popular choice for developing Hadoop applications, as Hadoop has several Java libraries that support MapReduce. More information is available at hadoop.apache.org.
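The MapReduce programming model itself is easy to illustrate outside Hadoop. The toy, single-process word count below is written in C rather than against Hadoop's Java API (none of the names come from Hadoop): the map step emits (word, 1) pairs and the reduce step sums the counts for each distinct word, which is exactly the work Hadoop distributes across a cluster.

    #include <stdio.h>
    #include <string.h>

    #define MAX_WORDS 100
    #define MAX_LEN   32

    struct pair { char word[MAX_LEN]; int count; };

    static struct pair emitted[MAX_WORDS];   /* intermediate (word, 1) pairs */
    static int nemitted;

    /* Map: split one input record into words and emit (word, 1) for each. */
    static void map(const char *record) {
        char copy[256];
        strncpy(copy, record, sizeof copy - 1);
        copy[sizeof copy - 1] = '\0';
        for (char *w = strtok(copy, " "); w != NULL; w = strtok(NULL, " ")) {
            strncpy(emitted[nemitted].word, w, MAX_LEN - 1);
            emitted[nemitted].count = 1;
            nemitted++;
        }
    }

    /* Reduce: combine all emitted pairs that share a key into one total. */
    static void reduce(void) {
        for (int i = 0; i < nemitted; i++) {
            if (emitted[i].count == 0) continue;          /* already merged */
            for (int j = i + 1; j < nemitted; j++)
                if (strcmp(emitted[i].word, emitted[j].word) == 0) {
                    emitted[i].count += emitted[j].count;
                    emitted[j].count = 0;
                }
            printf("%s: %d\n", emitted[i].word, emitted[i].count);
        }
    }

    int main(void) {
        map("the quick brown fox");          /* record 1 */
        map("the lazy dog and the fox");     /* record 2 */
        reduce();                            /* prints the: 3, fox: 2, and so on */
        return 0;
    }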

To load the operating system and start executing it, the bootstrap program must locate the operating-system kernel and load it into memory. Once the kernel is loaded and executing, it can start providing services to the system and its users. Some services are provided outside of the kernel by system programs that are loaded into memory at boot time to become system daemons, which run the entire time the kernel is running.

Once this phase is complete, the system is fully booted, and the system waits for some event to occur. Events are almost always signaled by the occurrence of an interrupt. Hardware interrupts were described earlier in this chapter. Another form of interrupt is a trap (or an exception), which is a software-generated interrupt caused either by an error (for example, division by zero or invalid memory access) or by a specific request from a user program that an operating-system service be performed by executing a special operation called a system call. One of the most important abilities of an operating system is to run multiple programs: a single program cannot, in general, keep either the CPU or the I/O devices busy at all times, and users typically want to run more than one program at a time as well. Multiprogramming increases CPU utilization, as well as keeping users satisfied, by organizing programs so that the CPU always has one to execute. In a multiprogrammed system, a program in execution is termed a process. The idea is as follows: the operating system keeps several processes in memory simultaneously.

The operating system picks and begins to execute one of these processes. Eventually, the process may have to wait for some task, such as an I/O operation, to complete. In a non-multiprogrammed system, the CPU would sit idle. In a multiprogrammed system, the operating system simply switches to, and executes, another process. When that process needs to wait, the CPU switches to another process, and so on. Eventually, the first process finishes waiting and gets the CPU back. As long as at least one process needs to execute, the CPU is never idle.
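The switch-on-wait idea can be demonstrated with a small simulation in C. The "processes" below are just counters invented for the example; the loop mimics an operating system that runs one ready process per tick, switches away when that process starts waiting for I/O, and is idle only when every process is waiting.

    #include <stdbool.h>
    #include <stdio.h>

    /* Toy process: it needs several CPU bursts, and after each burst it waits for I/O. */
    struct process {
        int bursts_left;     /* CPU bursts still needed                */
        int io_ticks_left;   /* 0 means ready; >0 means waiting on I/O */
    };

    int main(void) {
        struct process procs[3] = { {3, 0}, {2, 0}, {4, 0} };
        int nprocs = 3, finished = 0, tick = 0;

        while (finished < nprocs) {
            bool ran = false;
            for (int i = 0; i < nprocs; i++) {
                struct process *p = &procs[i];
                if (p->io_ticks_left > 0)              /* still waiting; its I/O progresses */
                    p->io_ticks_left--;
                else if (!ran && p->bursts_left > 0) {
                    /* The OS picks this ready process and runs it for one burst. */
                    printf("tick %d: running process %d\n", tick, i);
                    if (--p->bursts_left == 0)
                        finished++;
                    else
                        p->io_ticks_left = 2;          /* it now waits for I/O; switch away */
                    ran = true;
                }
            }
            if (!ran)
                printf("tick %d: CPU idle (all processes waiting)\n", tick);
            tick++;
        }
        return 0;
    }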

This idea is common in other life situations. A lawyer does not work for only one client at a time, for example. While one case is waiting to go to trial or have papers typed, the lawyer can work on another case. If she has enough clients, the lawyer will never be idle for lack of work. Idle lawyers tend to become politicians, so there is a certain social value in keeping lawyers busy. Multitasking is a logical extension of multiprogramming. In multitasking systems, the CPU executes multiple processes by switching among them, but the switches occur frequently, providing the user with a fast response time. Interactive input, for example, may arrive only as fast as a user can type.

[Figure: memory layout for a multiprogramming system, showing the operating system and processes 1 through 4 resident in memory between address 0 and max.]

Rather than let the CPU sit idle as this interactive input takes place, the operating system will rapidly switch the CPU to another process. Having several processes in memory at the same time requires some form of memory management, which we cover in Chapter 9 and Chapter 10. In addition, if several processes are ready to run at the same time, the system must choose which process will run next.

Making this decision is CPU scheduling, which is discussed in Chapter 5. Finally, running multiple processes concurrently requires that their ability to affect one another be limited in all phases of the operating system, including process scheduling, disk storage, and memory management. We discuss these considerations throughout the text. In a multitasking system, the operating system must ensure reasonable response time. A common method for doing so is virtual memory, a technique that allows the execution of a process that is not completely in memory (Chapter 10). The main advantage of this scheme is that it enables users to run programs that are larger than actual physical memory.

Further, it abstracts main memory into a large, uniform array of storage, separating logical memory as viewed by the user from physical memory. This arrangement frees programmers from concern over memory-storage limitations. Multiprogramming and multitasking systems must also provide a file system (Chapters 13, 14, and 15). The file system resides on secondary storage; hence, storage management must be provided (Chapter 11). In addition, a system must protect resources from inappropriate use (Chapter 17). To ensure orderly execution, the system must also provide mechanisms for process synchronization and communication (Chapters 6 and 7), and it may ensure that processes do not get stuck in a deadlock, forever waiting for one another (Chapter 8).

In order to ensure the proper execution of the system, we must be able to distinguish between the execution of operating-system code and user-defined code. The approach taken by most computer systems is to provide hardware support that allows differentiation among various modes of execution. At the very least, we need two separate modes of operation: user mode and kernel mode (also called supervisor mode, system mode, or privileged mode). A bit, called the mode bit, is added to the hardware of the computer to indicate the current mode: kernel (0) or user (1). With the mode bit, we can distinguish between a task that is executed on behalf of the operating system and one that is executed on behalf of the user.

When the computer system is executing on behalf of a user application, the system is in user mode. However, when a user application requests a service from the operating system (via a system call), the system must transition from user to kernel mode to fulfill the request. As we shall see, this architectural enhancement is useful for many other aspects of system operation as well. At system boot time, the hardware starts in kernel mode. The operating system is then loaded and starts user applications in user mode. Whenever a trap or interrupt occurs, the hardware switches from user mode to kernel mode (that is, changes the state of the mode bit to 0). Thus, whenever the operating system gains control of the computer, it is in kernel mode.

The system always switches to user mode by setting the mode bit to 1 before passing control to a user program. The dual mode of operation provides us with the means for protecting the operating system from errant users—and errant users from one another. We accomplish this protection by designating some of the machine instructions that may cause harm as privileged instructions. The hardware allows privileged instructions to be executed only in kernel mode. If an attempt is made to execute a privileged instruction in user mode, the hardware does not execute the instruction but rather treats it as illegal and traps it to the operating system. The instruction to switch to kernel mode is an example of a privileged instruction. Many additional privileged instructions are discussed throughout the text.
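How the mode bit gates privileged instructions can be pictured with a purely conceptual sketch (real CPUs perform this check in hardware, and every name below is invented for the example): before a privileged operation takes effect, the current mode is examined, and in user mode the attempt turns into a trap to the operating system instead of executing.

    #include <stdio.h>

    enum mode { KERNEL = 0, USER = 1 };       /* the mode bit: kernel (0) or user (1) */

    static enum mode current_mode = USER;

    /* Conceptual trap into the operating system for an illegal operation. */
    static void trap_to_os(const char *why) {
        printf("trap to operating system: %s\n", why);
    }

    /* A privileged instruction, e.g., disabling interrupts. */
    static void disable_interrupts(void) {
        if (current_mode != KERNEL) {
            trap_to_os("privileged instruction attempted in user mode");
            return;                           /* the instruction is not executed */
        }
        puts("interrupts disabled");
    }

    int main(void) {
        disable_interrupts();                 /* attempted from user mode: trapped */
        current_mode = KERNEL;                /* pretend we entered the kernel legitimately */
        disable_interrupts();                 /* now allowed */
        return 0;
    }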

The concept of modes can be extended beyond two modes. For example, Intel processors have four separate protection rings, where ring 0 is kernel mode and ring 3 is user mode. Although rings 1 and 2 could be used for various operating-system services, in practice they are rarely used. ARMv8 systems have seven modes. CPUs that support virtualization frequently have a separate mode to indicate when the virtual machine manager (VMM) is in control of the system. In this mode, the VMM has more privileges than user processes but fewer than the kernel. It needs that level of privilege so it can create and manage virtual machines, changing the CPU state to do so. We can now better understand the life cycle of instruction execution in a computer system. Initial control resides in the operating system, where instructions are executed in kernel mode.

When control is given to a user application, the mode is set to user mode. Eventually, control is switched back to the operating system via an interrupt, a trap, or a system call. Most contemporary operating systems—such as Microsoft Windows, Unix, and Linux—take advantage of this dual-mode feature and provide greater protection for the operating system. A system call is invoked in a variety of ways, depending on the functionality provided by the underlying processor. In all forms, it is the method used by a process to request action by the operating system. A system call usually takes the form of a trap to a specific location in the interrupt vector. This trap can be executed by a generic trap instruction, although some systems have a specific syscall instruction to invoke a system call.

When a system call is executed, it is typically treated by the hardware as a software interrupt. Control passes through the interrupt vector to a service routine in the operating system, and the mode bit is set to kernel mode. The system-call service routine is a part of the operating system. The kernel examines the interrupting instruction to determine what system call has occurred; a parameter indicates what type of service the user program is requesting. Additional information needed for the request may be passed in registers, on the stack, or in memory with pointers to the memory locations passed in registers.
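On Linux, this path can be exercised directly: the C library's syscall() wrapper places the system-call number and its arguments where the kernel expects them and issues the trap. The example below writes to standard output through the raw write system call (assuming a Linux system; the numbers and the wrapper differ elsewhere).

    #define _GNU_SOURCE
    #include <sys/syscall.h>     /* SYS_write: the system-call number */
    #include <unistd.h>          /* syscall() */

    int main(void) {
        const char msg[] = "hello from a raw system call\n";

        /* The wrapper passes SYS_write and the three arguments to the kernel,
         * executes the trap instruction, and returns the kernel's result. */
        long written = syscall(SYS_write, 1, msg, sizeof msg - 1);

        return written == (long)(sizeof msg - 1) ? 0 : 1;
    }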
