The <acronym>BIOS</acronym>
<literal>boot2</literal> defines an important structure, <literal>struct bootinfo</literal>. This structure is initialized by <literal>boot2</literal> and passed to the loader, and then further to the kernel. Some fields of this structure are set by <literal>boot2</literal>; the rest are set by the loader. Among other information, this structure contains the kernel filename, the <acronym>BIOS</acronym> hard disk geometry, the <acronym>BIOS</acronym> drive number of the boot device, the physical memory available, the <literal>envp</literal> pointer, etc. The definition for it is:
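An abridged sketch of the definition, following the layout of <literal>struct bootinfo</literal> in the i386 <filename>machine/bootinfo.h</filename> header (consult that header for the authoritative, complete definition):

struct bootinfo {
	u_int32_t	bi_version;
	u_int32_t	bi_kernelname;		/* represents a char * */
	u_int32_t	bi_nfs_diskless;	/* struct nfs_diskless * */
	u_int32_t	bi_n_bios_used;
	u_int32_t	bi_bios_geom[N_BIOS_GEOM];
	u_int32_t	bi_size;
	u_int8_t	bi_memsizes_valid;
	u_int8_t	bi_bios_dev;		/* bootdev BIOS unit number */
	u_int8_t	bi_pad[2];
	u_int32_t	bi_basemem;
	u_int32_t	bi_extmem;
	u_int32_t	bi_symtab;		/* struct symtab * */
	u_int32_t	bi_esymtab;		/* struct symtab * */
	/* Items below are set only by the advanced bootloader. */
	u_int32_t	bi_kernend;		/* end of kernel space */
	u_int32_t	bi_envp;		/* environment */
	u_int32_t	bi_modulep;		/* preloaded modules */
};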
#include &lt;sys/param.h&gt;
#include &lt;sys/kernel.h&gt;

/* Run automatically at the SI_SUB_FOO/SI_ORDER_FOO point of system
 * initialization; takes no useful argument. */
void foo_null(void *unused)
{
	foo_doo();
}
SYSINIT(foo, SI_SUB_FOO, SI_ORDER_FOO, foo_null, NULL);

struct foo foo_voodoo = {
	FOO_VOODOO
};

/* Run at the same initialization point, with &amp;foo_voodoo passed in
 * through the opaque pointer argument. */
void foo_arg(void *vdata)
{
	struct foo *foo = (struct foo *)vdata;

	foo_data(foo);
}
SYSINIT(bar, SI_SUB_FOO, SI_ORDER_FOO, foo_arg, &amp;foo_voodoo);
#include &lt;sys/param.h&gt;
#include &lt;sys/kernel.h&gt;

/* Run automatically at system shutdown time; takes no useful argument. */
void foo_cleanup(void *unused)
{
	foo_kill();
}
SYSUNINIT(foobar, SI_SUB_FOO, SI_ORDER_FOO, foo_cleanup, NULL);

struct foo_stack foo_stack = {
	FOO_STACK_VOODOO
};

/* Run at shutdown time, with &amp;foo_stack passed in through the
 * opaque pointer argument. */
void foo_flush(void *vdata)
{
	/* flush the stack handed in via vdata */
}
SYSUNINIT(barfoo, SI_SUB_FOO, SI_ORDER_FOO, foo_flush, &amp;foo_stack);
Label a socket, <parameter>newsocket</parameter>, newly <citerefentry><refentrytitle>accept</refentrytitle><manvolnum>2</manvolnum></citerefentry>ed, based on the <citerefentry><refentrytitle>listen</refentrytitle><manvolnum>2</manvolnum></citerefentry> socket, <parameter>oldsocket</parameter>.
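A minimal sketch of how a policy module might implement this entry point (the <literal>mypolicy</literal> name, <literal>struct mypolicy_label</literal>, and the <literal>SLOT()</literal> label-slot accessor are hypothetical placeholders in the style commonly used by MAC policies):

static void
mypolicy_create_socket_from_socket(struct socket *oldsocket,
    struct label *oldsocketlabel, struct socket *newsocket,
    struct label *newsocketlabel)
{
	struct mypolicy_label *source, *dest;

	source = SLOT(oldsocketlabel);	/* hypothetical slot accessor */
	dest = SLOT(newsocketlabel);

	/* The accepted socket inherits the listening socket's label. */
	*dest = *source;
}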
Management of Physical Memory—<literal>vm_page_t</literal>
The Unified Buffer Cache—<literal>vm_object_t</literal>
FreeBSD implements the idea of a generic <quote>VM object</quote>. VM objects can be associated with backing store of various types—unbacked, swap-backed, physical device-backed, or file-backed storage. Since the filesystem uses the same VM objects to manage in-core data relating to files, the result is a unified buffer cache.
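Concretely, these backing variants correspond to the object types declared in <filename>vm/vm_object.h</filename>; a brief annotated pairing (the comments are interpretive, the <literal>OBJT_*</literal> constants are the kernel's own):

OBJT_DEFAULT	/* unbacked: anonymous, zero-fill memory */
OBJT_SWAP	/* swap-backed */
OBJT_DEVICE	/* physical device-backed */
OBJT_VNODE	/* file-backed; the unified buffer cache case */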
Filesystem I/O—<literal>struct buf</literal>
Mapping Page Tables—<literal>vm_map_t, vm_entry_t</literal>
Following the pattern of several other multi-threaded <trademark class="registered">UNIX</trademark> kernels, FreeBSD deals with interrupt handlers by giving them their own thread context. Providing a context for interrupt handlers allows them to block on locks. To help avoid latency, however, interrupt threads run at real-time kernel priority. Thus, interrupt handlers should not execute for very long to avoid starving other kernel threads. In addition, since multiple handlers may share an interrupt thread, interrupt handlers should not sleep or use a sleepable lock to avoid starving another interrupt handler.
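As an illustration, a driver might register an ithread-style handler as sketched below (the <literal>mydev</literal> names and softc layout are hypothetical; <function>bus_setup_intr()</function> and the <literal>INTR_*</literal> flags are the standard interface):

#include &lt;sys/param.h&gt;
#include &lt;sys/bus.h&gt;
#include &lt;sys/lock.h&gt;
#include &lt;sys/mutex.h&gt;

struct mydev_softc {
	struct resource	*sc_irq_res;
	struct mtx	sc_mtx;
	void		*sc_intr_cookie;
};

static void
mydev_intr(void *arg)
{
	struct mydev_softc *sc = arg;

	/* Runs in its own interrupt thread: blocking on a mutex is
	 * fine, but never sleep or take a sleepable lock here. */
	mtx_lock(&amp;sc-&gt;sc_mtx);
	/* ... service and acknowledge the device ... */
	mtx_unlock(&amp;sc-&gt;sc_mtx);
}

/* Called from attach: no filter (NULL), only the threaded handler. */
static int
mydev_setup_intr(device_t dev, struct mydev_softc *sc)
{
	return (bus_setup_intr(dev, sc-&gt;sc_irq_res,
	    INTR_TYPE_MISC | INTR_MPSAFE, NULL, mydev_intr, sc,
	    &amp;sc-&gt;sc_intr_cookie));
}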
Most devices in a <trademark class="registered">UNIX</trademark>-like operating system are accessed through device-nodes, sometimes also called special files. These files are usually located under the directory <filename>/dev</filename> in the filesystem hierarchy. In releases prior to FreeBSD 5.0-RELEASE, where <citerefentry><refentrytitle>devfs</refentrytitle><manvolnum>5</manvolnum></citerefentry> support was not yet integrated into FreeBSD, each device node had to be created statically, independently of the existence of the associated device driver. Most device nodes on the system were created by running <command>MAKEDEV</command>.
<command>kldload</command> - loads a new kernel module
<command>kldstat</command> - lists loaded modules
Example of a Sample Echo Pseudo-Device Driver for FreeBSD 10.X - 12.X
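A condensed sketch of such a driver (abridged relative to the full example; <function>make_dev()</function>, <function>uiomove()</function>, and the <literal>cdevsw</literal> interface are standard, while the buffer handling here is simplified):

#include &lt;sys/param.h&gt;
#include &lt;sys/kernel.h&gt;
#include &lt;sys/module.h&gt;
#include &lt;sys/systm.h&gt;
#include &lt;sys/conf.h&gt;
#include &lt;sys/uio.h&gt;
#include &lt;sys/malloc.h&gt;

#define ECHO_BUFSIZE 255

MALLOC_DEFINE(M_ECHO, "echobuf", "buffer for echo driver");

static struct cdev *echo_dev;
static char *echo_msg;		/* last string written */
static size_t echo_len;

static d_read_t echo_read;
static d_write_t echo_write;

static struct cdevsw echo_cdevsw = {
	.d_version = D_VERSION,
	.d_read = echo_read,
	.d_write = echo_write,
	.d_name = "echo",
};

/* Store what userland writes, up to ECHO_BUFSIZE bytes. */
static int
echo_write(struct cdev *dev, struct uio *uio, int ioflag)
{
	size_t amt = MIN(uio-&gt;uio_resid, ECHO_BUFSIZE);
	int error = uiomove(echo_msg, amt, uio);

	if (error == 0)
		echo_len = amt;
	return (error);
}

/* Hand the stored bytes back on read. */
static int
echo_read(struct cdev *dev, struct uio *uio, int ioflag)
{
	size_t amt;

	if ((size_t)uio-&gt;uio_offset &gt;= echo_len)
		return (0);
	amt = MIN(uio-&gt;uio_resid, echo_len - uio-&gt;uio_offset);
	return (uiomove(echo_msg + uio-&gt;uio_offset, amt, uio));
}

static int
echo_loader(struct module *m, int what, void *arg)
{
	int error = 0;

	switch (what) {
	case MOD_LOAD:
		echo_msg = malloc(ECHO_BUFSIZE, M_ECHO, M_WAITOK | M_ZERO);
		echo_dev = make_dev(&amp;echo_cdevsw, 0, UID_ROOT, GID_WHEEL,
		    0600, "echo");
		break;
	case MOD_UNLOAD:
		destroy_dev(echo_dev);
		free(echo_msg, M_ECHO);
		break;
	default:
		error = EOPNOTSUPP;
		break;
	}
	return (error);
}

DEV_MODULE(echo, echo_loader, NULL);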
Other <trademark class="registered">UNIX</trademark> systems may support a second type of disk device known as block devices. Block devices are disk devices for which the kernel provides caching. This caching makes block-devices almost unusable, or at least dangerously unreliable. The caching will reorder the sequence of write operations, depriving the application of the ability to know the exact disk contents at any one instant in time. This makes predictable and reliable crash recovery of on-disk data structures (filesystems, databases, etc.) impossible. Since writes may be delayed, there is no way the kernel can report to the application which particular write operation encountered a write error, further compounding the consistency problem. For this reason, no serious application relies on block devices, and in fact, almost all applications that access disks take great care to always specify character (or <quote>raw</quote>) devices. Because the implementation of aliasing each disk (partition) to two devices with different semantics significantly complicated the relevant kernel code, FreeBSD dropped support for cached disk devices as part of modernizing the disk I/O infrastructure.
As the PnP devices are disabled when probing the legacy devices, they will not be attached twice (once as legacy and once as PnP). But in the case of device-dependent identify routines, it is the responsibility of the driver to make sure that the same device will not be attached by the driver twice: once as legacy user-configured and once as auto-identified.
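A sketch of the usual defensive check in an identify routine (the <literal>mydrv</literal> name is hypothetical; <function>device_find_child()</function> and <function>BUS_ADD_CHILD()</function> are the standard newbus interfaces):

static void
mydrv_identify(driver_t *driver, device_t parent)
{
	/* If an instance already exists (added by PnP enumeration or by
	 * the user's static configuration), do not add a second one. */
	if (device_find_child(parent, "mydrv", -1) != NULL)
		return;
	BUS_ADD_CHILD(parent, 0, "mydrv", -1);
}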
Free the memory allocated by <function>bus_dmamem_alloc()</function>. At present, freeing memory that was allocated with ISA restrictions is not implemented. For this reason, the recommended model of use is to keep and re-use allocated areas for as long as possible. Do not lightly free an area and then shortly afterwards allocate it again. That does not mean that <function>bus_dmamem_free()</function> should not be used at all: hopefully it will be properly implemented soon.
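A sketch of the keep-and-reuse model (the <literal>mydrv</literal> softc layout is hypothetical; <function>bus_dmamem_alloc()</function> and <function>bus_dmamem_free()</function> are the interfaces described here):

#include &lt;sys/param.h&gt;
#include &lt;machine/bus.h&gt;

struct mydrv_softc {			/* hypothetical driver softc */
	bus_dma_tag_t	sc_dmat;
	bus_dmamap_t	sc_dmamap;
	void		*sc_vaddr;
};

/* Allocate the DMA-able area once, at attach time... */
static int
mydrv_dma_setup(struct mydrv_softc *sc)
{
	return (bus_dmamem_alloc(sc-&gt;sc_dmat, &amp;sc-&gt;sc_vaddr,
	    BUS_DMA_NOWAIT | BUS_DMA_ZERO, &amp;sc-&gt;sc_dmamap));
}

/* ...and free it only from detach, not between individual transfers. */
static void
mydrv_dma_teardown(struct mydrv_softc *sc)
{
	bus_dmamem_free(sc-&gt;sc_dmat, sc-&gt;sc_vaddr, sc-&gt;sc_dmamap);
}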
Before calling the callback function from <function>bus_dmamap_load()</function>, the segment array is stored on the stack, pre-allocated for the maximal number of segments allowed by the tag. As a result, the practical limit on the number of segments on the i386 architecture is about 250-300 (the kernel stack is 4 KB minus the size of the user structure; a segment array entry is 8 bytes; and some space must be left free). Since the array is allocated based on this maximal number, it must not be set higher than really needed. Fortunately, for most hardware the maximal supported number of segments is much lower. But if the driver wants to handle buffers with a very large number of scatter/gather segments, it should do that in portions: load part of the buffer, transfer it to the device, load the next part of the buffer, and so on.
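For example, loading and programming one portion at a time might look like this sketch (<function>mydrv_sg_program()</function> and the softc fields are hypothetical helpers; <function>bus_dmamap_load()</function> and the callback signature are the standard interface):

/* The callback receives the segment array that bus_dmamap_load()
 * built on the stack. */
static void
mydrv_load_cb(void *arg, bus_dma_segment_t *segs, int nseg, int error)
{
	struct mydrv_softc *sc = arg;
	int i;

	if (error != 0)
		return;
	for (i = 0; i &lt; nseg; i++)
		mydrv_sg_program(sc, segs[i].ds_addr, segs[i].ds_len);
}

/* Load one portion of a large buffer; the caller repeats this per
 * portion, transferring each to the device before loading the next. */
static int
mydrv_load_portion(struct mydrv_softc *sc, void *buf, bus_size_t buflen)
{
	return (bus_dmamap_load(sc-&gt;sc_dmat, sc-&gt;sc_dmamap, buf, buflen,
	    mydrv_load_cb, sc, BUS_DMA_NOWAIT));
}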
Then allocate and activate all the necessary resources. As the port range will normally have been released before returning from probe, it has to be allocated again. We expect that the probe routine has properly set all the resource ranges and saved them in the softc structure. If the probe routine left some resource allocated, then it does not need to be allocated again (allocating it a second time would be considered an error).
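A sketch of the corresponding attach-time re-allocation (the field names are hypothetical; <function>bus_alloc_resource_any()</function> is the standard interface):

static int
mydrv_attach(device_t dev)
{
	struct mydrv_softc *sc = device_get_softc(dev);

	/* The probe routine released the port range before returning,
	 * so allocate it again here, per the ranges saved in the softc. */
	sc-&gt;sc_port_rid = 0;
	sc-&gt;sc_port_res = bus_alloc_resource_any(dev, SYS_RES_IOPORT,
	    &amp;sc-&gt;sc_port_rid, RF_ACTIVE);
	if (sc-&gt;sc_port_res == NULL)
		return (ENXIO);
	/* ... allocate and activate the remaining resources ... */
	return (0);
}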