Processes are scheduled for execution according to a <emphasis>process-priority</emphasis> parameter. This priority is managed by a kernel-based scheduling algorithm. Users can influence the scheduling of a process by specifying a parameter (<emphasis>nice</emphasis>) that weights the overall scheduling priority, but are still obligated to share the underlying CPU resources according to the kernel's scheduling policy.
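As a brief illustration, the sketch below shows a process voluntarily lowering its own scheduling weight with the <emphasis>nice</emphasis> interface; the increment of 10 is an arbitrary example, and the kernel's scheduling policy still decides when the process actually runs.
<programlisting>
/*
 * Minimal sketch: a process lowering its own scheduling weight.
 * A positive increment reduces priority; only the superuser may
 * pass a negative increment to raise it.
 */
#include <errno.h>
#include <stdio.h>
#include <unistd.h>

int
main(void)
{
	errno = 0;
	int newnice = nice(10);		/* add 10 to our nice value */

	if (newnice == -1 && errno != 0) {
		perror("nice");
		return (1);
	}
	printf("new nice value: %d\n", newnice);
	return (0);
}
</programlisting>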
Signals
The system defines a set of <emphasis>signals</emphasis> that may be delivered to a process. Signals in 4.4BSD are modeled after hardware interrupts. A process may specify a user-level subroutine to be a <emphasis>handler</emphasis> to which a signal should be delivered. When a signal is generated, it is blocked from further occurrence while it is being <emphasis>caught</emphasis> by the handler. Catching a signal involves saving the current process context and building a new one in which to run the handler. The signal is then delivered to the handler, which can either abort the process or return to the executing process (perhaps after setting a global variable). If the handler returns, the signal is unblocked and can be generated (and caught) again.
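A minimal sketch of this mechanism, using the POSIX <emphasis>sigaction</emphasis> spelling of the interface, follows; the choice of <emphasis>SIGINT</emphasis> and the variable names are arbitrary.
<programlisting>
/*
 * Minimal sketch of catching a signal with a user-level handler.
 * While the handler runs, further occurrences of SIGINT are blocked,
 * as described above, and are delivered after the handler returns.
 */
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

static volatile sig_atomic_t caught;	/* "setting a global variable" */

static void
handler(int signo)
{
	caught = signo;
}

int
main(void)
{
	struct sigaction sa;

	sa.sa_handler = handler;
	sigemptyset(&sa.sa_mask);	/* block no extra signals in the handler */
	sa.sa_flags = 0;
	if (sigaction(SIGINT, &sa, NULL) == -1) {
		perror("sigaction");
		return (1);
	}
	pause();			/* wait until some signal is caught */
	printf("caught signal %d\n", (int)caught);
	return (0);
}
</programlisting>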
Alternatively, a process may specify that a signal is to be <emphasis>ignored</emphasis>, or that a default action, as determined by the kernel, is to be taken. The default action of certain signals is to terminate the process. This termination may be accompanied by creation of a <emphasis>core file</emphasis> that contains the current memory image of the process for use in postmortem debugging.
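For illustration, a process might discard keyboard interrupts during a critical phase and later restore the default action; the use of <emphasis>SIGINT</emphasis> here is an arbitrary example.
<programlisting>
/*
 * Illustrative only: ignoring a signal, then restoring the kernel's
 * default action (for SIGINT, termination of the process).
 */
#include <signal.h>

int
main(void)
{
	signal(SIGINT, SIG_IGN);	/* SIGINT is now ignored */
	/* ... work that must not be interrupted from the keyboard ... */
	signal(SIGINT, SIG_DFL);	/* default action restored */
	return (0);
}
</programlisting>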
Some signals cannot be caught or ignored. These signals include <emphasis>SIGKILL</emphasis>, which kills runaway processes, and the job-control signal <emphasis>SIGSTOP</emphasis>.
A process may choose to have signals delivered on a special stack so that sophisticated software stack manipulations are possible. For example, a language supporting coroutines needs to provide a stack for each coroutine. The language run-time system can allocate these stacks by dividing up the single stack provided by 4.4BSD. If the kernel does not support a separate signal stack, the space allocated for each coroutine must be expanded by the amount of space required to catch a signal.
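A sketch of requesting such an alternate stack follows, using the <emphasis>sigaltstack</emphasis> interface as later standardized; names such as <emphasis>SIGSTKSZ</emphasis> and the choice of <emphasis>SIGSEGV</emphasis> are illustrative assumptions.
<programlisting>
/*
 * Sketch of arranging for signal delivery on a separate stack.
 */
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

static void
on_fault(int signo)
{
	/* Runs on the alternate stack, even if the normal stack overflowed. */
	_exit(1);
}

int
main(void)
{
	stack_t ss;
	struct sigaction sa;

	ss.ss_sp = malloc(SIGSTKSZ);	/* space reserved for handling signals */
	ss.ss_size = SIGSTKSZ;
	ss.ss_flags = 0;
	if (ss.ss_sp == NULL || sigaltstack(&ss, NULL) == -1) {
		perror("sigaltstack");
		return (1);
	}

	sa.sa_handler = on_fault;
	sigemptyset(&sa.sa_mask);
	sa.sa_flags = SA_ONSTACK;	/* deliver SIGSEGV on the alternate stack */
	sigaction(SIGSEGV, &sa, NULL);
	/* ... */
	return (0);
}
</programlisting>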
All signals have the same <emphasis>priority</emphasis>. If multiple signals are pending simultaneously, the order in which signals are delivered to a process is implementation specific. Signal handlers execute with the signal that caused their invocation to be blocked, but other signals may yet occur. Mechanisms are provided so that processes can protect critical sections of code against the occurrence of specified signals.
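One such mechanism is the process signal mask; the sketch below, using the POSIX <emphasis>sigprocmask</emphasis> interface, blocks <emphasis>SIGALRM</emphasis> (an arbitrary choice) around a critical section.
<programlisting>
/*
 * Sketch of protecting a critical section against a specific signal.
 * A blocked signal remains pending and is delivered afterward.
 */
#include <signal.h>

int
main(void)
{
	sigset_t block, omask;

	sigemptyset(&block);
	sigaddset(&block, SIGALRM);
	sigprocmask(SIG_BLOCK, &block, &omask);	/* enter critical section */

	/* ... code that must not be interrupted by SIGALRM ... */

	sigprocmask(SIG_SETMASK, &omask, NULL);	/* leave critical section */
	return (0);
}
</programlisting>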
The detailed design and implementation of signals is described in Section 4.7.
Process Groups and Sessions
Processes are organized into <emphasis>process groups</emphasis>. Process groups are used to control access to terminals and to provide a means of distributing signals to collections of related processes. A process inherits its process group from its parent process. Mechanisms are provided by the kernel to allow a process to alter its process group or the process group of its descendants. Creating a new process group is easy; the value of a new process group is ordinarily the process identifier of the creating process.
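As a small illustration, a newly forked child can place itself into a new process group whose identifier equals its own process identifier; the sketch uses the POSIX <emphasis>setpgid</emphasis> call.
<programlisting>
/*
 * Illustrative: a child placing itself into a new process group whose
 * identifier equals its own process identifier.
 */
#include <sys/types.h>
#include <stdio.h>
#include <unistd.h>

int
main(void)
{
	pid_t pid = fork();

	if (pid == 0) {				/* child */
		if (setpgid(0, 0) == -1)	/* new group, id == child's pid */
			perror("setpgid");
		printf("child process group: %d\n", (int)getpgrp());
		_exit(0);
	}
	return (0);
}
</programlisting>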
The group of processes in a process group is sometimes referred to as a <emphasis>job</emphasis> and is manipulated by high-level system software, such as the shell. A common kind of job created by a shell is a <emphasis>pipeline</emphasis> of several processes connected by pipes, such that the output of the first process is the input of the second, the output of the second is the input of the third, and so forth. The shell creates such a job by forking a process for each stage of the pipeline, then putting all those processes into a separate process group.
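A much-reduced sketch of that procedure for a two-stage pipeline follows; the commands <emphasis>ls</emphasis> and <emphasis>wc</emphasis> are arbitrary examples, and error handling is omitted for brevity.
<programlisting>
/*
 * Sketch of a shell building a two-stage pipeline job: one pipe, one
 * fork per stage, both children placed in the same new process group.
 */
#include <sys/types.h>
#include <unistd.h>

int
main(void)
{
	int fds[2];
	pid_t pid, pgid;

	pipe(fds);

	if ((pid = fork()) == 0) {		/* first stage */
		setpgid(0, 0);			/* leader of the job's new group */
		dup2(fds[1], STDOUT_FILENO);	/* write end becomes stdout */
		close(fds[0]);
		close(fds[1]);
		execlp("ls", "ls", (char *)NULL);
		_exit(127);
	}
	pgid = pid;

	if ((pid = fork()) == 0) {		/* second stage */
		setpgid(0, pgid);		/* join the same process group */
		dup2(fds[0], STDIN_FILENO);	/* read end becomes stdin */
		close(fds[0]);
		close(fds[1]);
		execlp("wc", "wc", "-l", (char *)NULL);
		_exit(127);
	}

	close(fds[0]);
	close(fds[1]);
	return (0);
}
</programlisting>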
A user process can send a signal to each process in a process group, as well as to a single process. A process in a specific process group may receive software interrupts affecting the group, causing the group to suspend or resume execution, or to be interrupted or terminated.
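For example, a group can be suspended and later resumed with group-wide signals; the fragment below uses the <emphasis>killpg</emphasis> interface and assumes the caller already knows the group identifier.
<programlisting>
/*
 * Fragment: suspending and later resuming every process in a group.
 */
#include <sys/types.h>
#include <signal.h>

void
suspend_and_resume(pid_t pgid)
{
	killpg(pgid, SIGSTOP);	/* suspend every process in the group */
	/* ... */
	killpg(pgid, SIGCONT);	/* resume execution of the group */
}
</programlisting>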
A terminal has a process-group identifier assigned to it. This identifier is normally set to the identifier of a process group associated with the terminal. A job-control shell may create a number of process groups associated with the same terminal; the terminal is the <emphasis>controlling terminal</emphasis> for each process in these groups. A process may read from a descriptor for its controlling terminal only if the terminal's process-group identifier matches that of the process. If the identifiers do not match, the process will be blocked if it attempts to read from the terminal. By changing the process-group identifier of the terminal, a shell can arbitrate a terminal among several different jobs. This arbitration is called <emphasis>job control</emphasis> and is described, with process groups, in Section 4.8.
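The sketch below suggests how a job-control shell might hand the terminal to a job's process group and reclaim it afterward; the function name and the use of the POSIX <emphasis>tcsetpgrp</emphasis> call are illustrative assumptions.
<programlisting>
/*
 * Sketch: a job-control shell giving its terminal to a job's process
 * group, waiting for the job, and then reclaiming the terminal.
 */
#include <sys/types.h>
#include <sys/wait.h>
#include <signal.h>
#include <unistd.h>

void
run_in_foreground(pid_t job_pgid)
{
	pid_t shell_pgid = getpgrp();

	tcsetpgrp(STDIN_FILENO, job_pgid);	/* terminal now belongs to the job */
	killpg(job_pgid, SIGCONT);		/* let the job run */
	waitpid(-job_pgid, NULL, WUNTRACED);	/* wait for it to stop or exit */
	tcsetpgrp(STDIN_FILENO, shell_pgid);	/* reclaim the terminal */
}
</programlisting>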
Just as a set of related processes can be collected into a process group, a set of process groups can be collected into a <emphasis>session</emphasis>. The main uses for sessions are to create an isolated environment for a daemon process and its children, and to collect together a user's login shell and the jobs that that shell spawns.
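For example, a daemon typically detaches itself into a new session with the <emphasis>setsid</emphasis> call, as in the following minimal sketch.
<programlisting>
/*
 * Illustrative: the classic way a daemon detaches into its own
 * session, leaving it with no controlling terminal.
 */
#include <sys/types.h>
#include <stdlib.h>
#include <unistd.h>

int
main(void)
{
	if (fork() != 0)
		exit(0);	/* parent exits; child runs in the background */
	if (setsid() == -1)	/* child becomes session and group leader */
		exit(1);
	/* ... daemon work ... */
	return (0);
}
</programlisting>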
Memory Management
Each process has its own private address space. The address space is initially divided into three logical segments: <emphasis>text</emphasis>, <emphasis>data</emphasis>, and <emphasis>stack</emphasis>. The text segment is read-only and contains the machine instructions of a program. The data and stack segments are both readable and writable. The data segment contains the initialized and uninitialized data portions of a program, whereas the stack segment holds the application's run-time stack. On most machines, the stack segment is extended automatically by the kernel as the process executes. A process can expand or contract its data segment by making a system call, whereas a process can change the size of its text segment only when the segment's contents are overlaid with data from the filesystem, or when debugging takes place. The initial contents of the segments of a child process are duplicates of the segments of a parent process.
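As an illustration of growing the data segment by a system call, the sketch below uses the historical <emphasis>sbrk</emphasis> interface, which today is normally hidden behind <emphasis>malloc</emphasis>; the 4-Kbyte increment is an arbitrary example.
<programlisting>
/*
 * Illustrative: growing the data segment with sbrk.
 */
#include <stdio.h>
#include <unistd.h>

int
main(void)
{
	void *oldbrk = sbrk(0);			/* current end of the data segment */

	if (sbrk(4096) == (void *)-1) {		/* ask for 4 Kbyte more */
		perror("sbrk");
		return (1);
	}
	printf("data segment grew from %p to %p\n", oldbrk, sbrk(0));
	return (0);
}
</programlisting>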
The entire contents of a process address space do not need to be resident for a process to execute. If a process references a part of its address space that is not resident in main memory, the system <emphasis>pages</emphasis> the necessary information into memory. When system resources are scarce, the system uses a two-level approach to maintain available resources. If a modest amount of memory is available, the system will take memory resources away from processes if these resources have not been used recently. Should there be a severe resource shortage, the system will resort to <emphasis>swapping</emphasis> the entire context of a process to secondary storage. The <emphasis>demand paging</emphasis> and <emphasis>swapping</emphasis> done by the system are effectively transparent to processes. A process may, however, advise the system about expected future memory utilization as a performance aid.
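Advice of this sort is typically given through the <emphasis>madvise</emphasis> interface; the sketch below, with an arbitrary 1-Mbyte anonymous region, is purely illustrative.
<programlisting>
/*
 * Sketch: advising the kernel about expected access to part of an
 * address space.
 */
#include <stddef.h>
#include <sys/mman.h>

int
main(void)
{
	size_t len = 1 << 20;
	void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
	    MAP_ANON | MAP_PRIVATE, -1, 0);

	if (buf == MAP_FAILED)
		return (1);
	madvise(buf, len, MADV_SEQUENTIAL);	/* expect sequential access */
	madvise(buf, len, MADV_DONTNEED);	/* pages may be reclaimed */
	munmap(buf, len);
	return (0);
}
</programlisting>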
BSD Memory-Management Design Decisions
The support of large sparse address spaces, mapped files, and shared memory was a requirement for 4.2BSD. An interface was specified, called <emphasis>mmap</emphasis>, that allowed unrelated processes to request a shared mapping of a file into their address spaces. If multiple processes mapped the same file into their address spaces, changes to the file's portion of an address space by one process would be reflected in the area mapped by the other processes, as well as in the file itself. Ultimately, 4.2BSD was shipped without the <emphasis>mmap</emphasis> interface, because of pressure to make other features, such as networking, available.
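The sketch below illustrates the shared-mapping semantics just described, using the <emphasis>mmap</emphasis> call as eventually provided; the file name is an arbitrary example and error handling is minimal.
<programlisting>
/*
 * Minimal sketch of a shared file mapping: changes made through the
 * mapping are seen by every process mapping the file and are written
 * back to the file itself.
 */
#include <sys/mman.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>

int
main(void)
{
	struct stat st;
	int fd = open("/tmp/shared.dat", O_RDWR);

	if (fd == -1 || fstat(fd, &st) == -1 || st.st_size == 0)
		return (1);
	char *p = mmap(NULL, (size_t)st.st_size, PROT_READ | PROT_WRITE,
	    MAP_SHARED, fd, 0);
	if (p == MAP_FAILED)
		return (1);
	p[0] = 'A';		/* visible to other processes and to the file */
	munmap(p, (size_t)st.st_size);
	close(fd);
	return (0);
}
</programlisting>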
Further development of the <emphasis>mmap</emphasis> interface continued during the work on 4.3BSD. Over 40 companies and research groups participated in the discussions leading to the revised architecture that was described in the Berkeley Software Architecture Manual <xref linkend="biblio-mckusick-1"/>. Several of the companies have implemented the revised interface <xref linkend="biblio-gingell"/>.
Once again, time pressure prevented 4.3BSD from providing an implementation of the interface. Although the latter could have been built into the existing 4.3BSD virtual-memory system, the developers decided not to put it in because that implementation was nearly 10 years old. Furthermore, the original virtual-memory design was based on the assumption that computer memories were small and expensive, whereas disks were locally connected, fast, large, and inexpensive. Thus, the virtual-memory system was designed to be frugal with its use of memory at the expense of generating extra disk traffic. In addition, the 4.3BSD implementation was riddled with VAX memory-management hardware dependencies that impeded its portability to other computer architectures. Finally, the virtual-memory system was not designed to support the tightly coupled multiprocessors that are becoming increasingly common and important today.
Attempts to improve the old implementation incrementally seemed doomed to failure. A completely new design, on the other hand, could take advantage of large memories, conserve disk transfers, and have the potential to run on multiprocessors. Consequently, the virtual-memory system was completely replaced in 4.4BSD. The 4.4BSD virtual-memory system is based on the Mach 2.0 VM system <xref linkend="biblio-tevanian"/>, with updates from Mach 2.5 and Mach 3.0. It features efficient support for sharing, a clean separation of machine-independent and machine-dependent features, as well as (currently unused) multiprocessor support. Processes can map files anywhere in their address space. They can share parts of their address space by doing a shared mapping of the same file. Changes made by one process are visible in the address space of the other process, and also are written back to the file itself. Processes can also request private mappings of a file, which prevents any changes that they make from being visible to other processes mapping the file or being written back to the file itself.
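By contrast with the shared mapping shown earlier, a private mapping keeps modifications local to the mapping process; the following sketch, again with an arbitrary file and minimal error handling, illustrates the idea.
<programlisting>
/*
 * Sketch of a private mapping: writes through a MAP_PRIVATE mapping
 * stay local to this process and are never written back to the file.
 */
#include <sys/mman.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>

int
main(void)
{
	struct stat st;
	int fd = open("/tmp/shared.dat", O_RDONLY);	/* same example file */

	if (fd == -1 || fstat(fd, &st) == -1 || st.st_size == 0)
		return (1);
	char *p = mmap(NULL, (size_t)st.st_size, PROT_READ | PROT_WRITE,
	    MAP_PRIVATE, fd, 0);
	if (p == MAP_FAILED)
		return (1);
	p[0] = '#';		/* modifies only this process's copy of the page */
	munmap(p, (size_t)st.st_size);
	close(fd);
	return (0);
}
</programlisting>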
Another issue with the virtual-memory system is the way that information is passed into the kernel when a system call is made. 4.4BSD always copies data from the process address space into a buffer in the kernel. For read or write operations that are transferring large quantities of data, doing the copy can be time consuming. An alternative to doing the copying is to remap the process memory into the kernel. The 4.4BSD kernel always copies the data for several reasons:
Often, the user data are not page aligned and are not a multiple of the hardware page length.
If the page is taken away from the process, it will no longer be able to reference that page. Some programs depend on the data remaining in the buffer even after those data have been written.
If the process is allowed to keep a copy of the page (as it is in current 4.4BSD semantics), the page must be made <emphasis>copy-on-write</emphasis>. A copy-on-write page is one that is protected against being written by being made read-only. If the process attempts to modify the page, the kernel gets a write fault. The kernel then makes a copy of the page that the process can modify. Unfortunately, the typical process will immediately try to write new data to its output buffer, forcing the data to be copied anyway.
When pages are remapped to new virtual-memory addresses, most memory-management hardware requires that the hardware address-translation cache be purged selectively. The cache purges are often slow. The net effect is that remapping is slower than copying for blocks of data less than 4 to 8 Kbyte.
The biggest incentives for memory mapping are the needs for accessing big files and for passing large quantities of data between processes. The <emphasis>mmap</emphasis> interface provides a way for both of these tasks to be done without copying.
Memory Management Inside the Kernel
The kernel often does allocations of memory that are needed for only the duration of a single system call. In a user process, such short-term memory would be allocated on the run-time stack. Because the kernel has a limited run-time stack, it is not feasible to allocate even moderate-sized blocks of memory on it. Consequently, such memory must be allocated through a more dynamic mechanism. For example, when the system must translate a pathname, it must allocate a 1-Kbyte buffer to hold the name. Other blocks of memory must be more persistent than a single system call, and thus could not be allocated on the stack even if there was space. An example is protocol-control blocks that remain throughout the duration of a network connection.
Demands for dynamic memory allocation in the kernel have increased as more services have been added. A generalized memory allocator reduces the complexity of writing code inside the kernel. Thus, the 4.4BSD kernel has a single memory allocator that can be used by any part of the system. It has an interface similar to the C library routines <emphasis>malloc</emphasis> and <emphasis>free</emphasis> that provide memory allocation to application programs <xref linkend="biblio-mckusick-2"/>. Like the C library interface, the allocation routine takes a parameter specifying the size of memory that is needed. The range of sizes for memory requests is not constrained; however, physical memory is allocated and is not paged. The free routine takes a pointer to the storage being freed, but does not require the size of the piece of memory being freed.
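A sketch of that interface, written in the spelling used by later FreeBSD kernels (the <emphasis>malloc_type</emphasis> argument and the flag names are assumptions rather than 4.4BSD verbatim), might look as follows.
<programlisting>
/*
 * Sketch of the kernel allocator interface described above, in the
 * malloc(9) spelling of later FreeBSD kernels; names here are
 * illustrative assumptions.
 */
#include <sys/param.h>
#include <sys/kernel.h>
#include <sys/malloc.h>

MALLOC_DEFINE(M_EXAMPLE, "example", "illustrative short-term allocations");

static void
example(void)
{
	/* A 1-Kbyte pathname buffer needed only for the current call. */
	char *path = malloc(1024, M_EXAMPLE, M_WAITOK);

	/* ... translate the pathname ... */

	free(path, M_EXAMPLE);	/* no size argument is required */
}
</programlisting>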