System Requirements
This chapter gives an operational overview of LoPAR systems and introduces platform-specific software and/or firmware components that are required for OS support. This chapter also addresses some system-level requirements that are broad in nature and are fundamental to the architecture described in later chapters. Lastly, a table of requirements is presented as a guide for platform providers.
System Operation
Control Flow is an example of typical phases of operation from power-on to full system operation to termination. This section gives an overview of the processes involved in moving through these phases of operation. This section will introduce concepts and terms that will be explained in more detail in the following chapters. Most requirements relating to these processes will also appear in later chapters. The discussion in this chapter will be restricted to systems with a single processor. Refer to for the unique requirements relating to multiprocessor systems.
Phases of Operation (example)
POST
Power On Self Test (POST) is the process by which the firmware tests those areas of the hardware that are critical to its ability to carry out the boot process. It is not intended to be all-inclusive or to be sophisticated in how it relates to the user. Diagnostics with these characteristics will generally be provided as a service aid.
Platform Implementation Note: The platform may choose to utilize a service processor to assist in the implementation of functions during various phases of operation. The service (or support) processor is not a requirement of this architecture, but is usually seen in the larger systems.
Boot Phase
The following sections describe the boot phase of operation. The fundamental parts of the boot phase are:
- Identify and configure system components.
- Generate a device tree.
- Initialize/reset system components.
- Locate an OS boot image.
- Load the boot image into memory.
Identify and Configure System Components
Firmware is generally written with a particular hardware platform in mind, so some components and their configuration data can be hardcoded. Examples of these components are: type of processor, cache characteristics, and the use of embedded components on the planar. This hardcoding is not a requirement, only a practical approach to a part of this task.
R1--1. The firmware must, by various means, become aware of all components in the system associated with the boot process and configure or reset those components into a known state (components include, for example, buses, bridges, I/O Adapters (IOAs) (see for the definition of an IOA), and I/O devices).
R1--2. The firmware must obtain certain system information which is necessary to build the OF device tree from “walking” the I/O buses (for example, identification of IOAs and bridges).
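As a hedged illustration of the bus walk mentioned in Requirement R1--2, the sketch below shows how firmware might probe conventional PCI configuration space to identify IOAs and bridges; the pci_config_read16/pci_config_read8 accessors and record_device_node are assumed platform helpers, not interfaces defined by this architecture.

    #include <stdint.h>

    extern uint16_t pci_config_read16(int bus, int dev, int fn, int offset);
    extern uint8_t  pci_config_read8(int bus, int dev, int fn, int offset);
    extern void     record_device_node(int bus, int dev, int fn,
                                       uint16_t vendor, uint16_t device);

    /* Walk one PCI bus, noting each present function and descending behind bridges. */
    static void walk_pci_bus(int bus)
    {
        for (int dev = 0; dev < 32; dev++) {
            for (int fn = 0; fn < 8; fn++) {
                uint16_t vendor = pci_config_read16(bus, dev, fn, 0x00);
                if (vendor == 0xFFFF) {          /* no function present */
                    if (fn == 0)
                        break;                   /* empty slot */
                    continue;
                }

                uint16_t device = pci_config_read16(bus, dev, fn, 0x02);
                record_device_node(bus, dev, fn, vendor, device);

                uint8_t header = pci_config_read8(bus, dev, fn, 0x0E);
                if ((header & 0x7F) == 0x01)     /* PCI-to-PCI bridge */
                    walk_pci_bus(pci_config_read8(bus, dev, fn, 0x19));

                if (fn == 0 && !(header & 0x80)) /* single-function device */
                    break;
            }
        }
    }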
Generate a Device Tree
R1--1. The firmware must build a device tree and the OS must gain access to the device tree through Client Interface Services (CIS).
R1--2. Configuration information (configuration variables) which are stored in non-volatile memory must be stored under the partition names of-config or common, depending on the nature of the information (see ).
Initialize/Reset System Components
The OS requires devices to be in a known state at the time control is transferred from the firmware. Firmware may gain control with the hardware in various states depending on what has initiated the boot process:
- Normal boot: Initiated by a power-on sequence; all devices and registers begin in a hardware reset state.
- Reboot: Device state is unpredictable at the start of a reboot.
The hardware reset state for a device is an inactive state. An inactive state is defined as a state that allows no system level activity; there can be no bus activity, interrupt requests, or DMA requests possible from the IOA that is in a reset state. Since the OS may configure devices in a manner that requires very specific control over these functions to avoid transitory resource conflicts, these functions should be disabled at the device and not at a central controlling agent (for example, the interrupt controller). Devices that do not share any resources may have these resources disabled at a system level (for example, keyboard interrupts may be disabled at the interrupt controller in standard configurations).
R1--1. IOAs must adhere to the reset states given in when control of the system is passed from firmware to an OS.

IOA Reset States
Bus: PCI
  IOAs Left Open by OF:
    - Interrupts not active
    - No outstanding I/O operations
    - IOA is configured
  Other IOAs:
    - The IOA is inactive: I/O access response disabled, Memory access response disabled, PCI master access disabled
    - Interrupts not active
    - IOA is reset (see note)
Bus: System
  IOAs Left Open by OF:
    - Configured per OF device tree
    - Interrupts inactive
    - DMA inactive
    - No outstanding I/O operations
  Other IOAs:
    - The IOA is in hardware reset state (see note) or inactive: Interrupts inactive, DMA inactive
R1--2. The platform must include the root node OF device tree property “ibm,pci-full-cfg” with a value of 1 and configure the configuration registers of all PCI IOAs and bridges as specified by Requirement .
R1--3. Prior to passing control to the OS, the platform must initialize all processor registers to a value which, if accessed, will not yield a machine check.
R1--4. Prior to passing control to the OS, the platform must initialize all registers not visible to the OS to a state that is consistent with the system view represented by the OF device tree.
R1--5. During boot or reboot operations and prior to passing control to the OS, the platform must initialize the interrupt controller.
R1--6. Hardware must provide a mechanism, callable by software, to hard reset all processors and I/O subsystems in order to facilitate the implementation of the RTAS system-reboot function.
Platform Implementation Note: The platform is required to reset the interrupt controller to avoid inconsistency among the states of IOAs, the interrupt controller, and software interrupt handler routines. The reset state is shown in .
Software and Firmware Implementation Note: The conventional PCI configuration registers are further described in the and are copied into OF properties described in the . PCI-X configuration registers are further described in the . PCI Express configuration registers are further described in the . PCI-X IOAs and bridges and PCI Express IOAs, bridges, and switches are treated the same as conventional PCI IOAs and bridges for purposes of generation of OF properties.
Software and Firmware Implementation Note: In reference to Requirement , generally the initial value of processor registers is contained in the processor binding. However, some processors have deviations on register usage. Also, since some register implementation is optional, not all processors are the same.
Locate an OS Boot Image
The OS boot image is located as described in . A device and filename can be specified directly from the command interpreter (the boot command) or OF will locate the image through an automatic boot process controlled by configuration variables. Once a boot image is located, the device path is set in the device tree as the “bootpath” property of the chosen node. The devices searched by the automatic boot process are those contained in the boot-device configuration variable. Implementations may choose to limit the number of boot device entries that are searched. The root node device tree property “ibm,max-boot-devices” communicates the number of boot-device entries that the platform processes. If multi-boot (multiple bootable OSs residing on the same platform) is supported, a configuration variable instructs the firmware to display a multi-boot menu from which the OS and bootpath are selected. See for information relating to the multiboot process.
R1--1. The platform must supply in the OF root node the “ibm,max-boot-devices” property.
Load the Boot Image into Memory
After locating the image, it is loaded into memory at the location given by a configuration variable or as specified by the OS load image format.
Boot Process
The boot process is described in . Steps in the process are reviewed here, but the authoritative and complete description of the process is included in . is a depiction of the boot flow showing the action of the f1, f5, and f6 function keys. The figure should only be used as an aid in understanding the requirements for LoPAR systems.
Boot Process
The Boot Prompt
R1--1. After the banner step of the boot sequence, the platform display must present a clearly visible graphical or text message (boot prompt), and must provide a reaction window of at least 3 seconds that prompts the user to activate various options including the f1, f5, and f6 control keys detailed in this document.
R1--2. The functions provided by f1, f5, and f6 described in this chapter must be equivalently provided by the tty numeral keys 1, 5, and 6, respectively, when a serial terminal is attached.
R1--3. The boot prompt must identify the platform and communicate to the user that there are options that may be invoked to alter the boot process.
The Menus
Once the boot prompt is displayed, the System Management Services (SMS) menu can be invoked. SMS provides a user interface for utilities, configuration, and the Multiboot Menu (as introduced in ) for boot/install and the OF command interpreter. The Multiboot Menu is formatted so that block devices that currently contain boot information are most easily selected by the user. Because of the serial nature of byte devices, they should not be opened unless specifically included in a boot list. The user may also wish to add devices to the boot-device and/or diag-device configuration variables (boot lists) that currently do not contain boot information. The Multiboot Menu presents these devices in a secondary manner. If the Multiboot Menu boot/install option is chosen, OF will execute the <boot-script> of the selected OS's bootinfo.txt, and if the user elects to make this the default, the boot-command variable will be set equal to the contents of the <boot-script>.
R1--1. The SMS menu must provide a means to display the Multiboot Menu.
R1--2. If, after the boot prompt is displayed, auto-boot? = false and menu? = true, the firmware must display the Multiboot Menu directly.
R1--3. The Multiboot Menu must present all potential boot device options, differentiating block devices that contain locatable bootinfo objects.
R1--4. Firmware must evaluate all bootinfo objects at each invocation of the Multiboot Menu to ensure that any modifications made by the OS will be included.
R1--5. The Multiboot Menu must provide a means to enter the currently selected boot option into the desired location within the boot-device/boot-file or diag-device/diag-file configuration variables.
R1--6. The platform must provide a means to delete individual boot options from the boot-device/boot-file and diag-device/diag-file configuration variables.
R1--7. The Multiboot Menu must provide an option for the user to select whether or not to return to the Multiboot Menu on each boot.
Firmware Implementation Note: Returning to the Multiboot Menu on reboot is controlled with the auto-boot? and menu? configuration variables.
The f1 Key
The boot process is further controlled by the auto-boot? and menu? OF configuration variables and the f1 key.
R1--1. If, after the boot prompt is displayed, function key f1 is pushed or if auto-boot? = false and menu? = false, the firmware must display the System Management Services (SMS) menu.
R1--2. The default value for the auto-boot? configuration variable must be true.
R1--3. The default value for the menu? configuration variable must be false.
The f5 and f6 Keys
If auto-boot? = true, the commands specified by the boot-command configuration variable are executed. If the boot command has no arguments, IEEE 1275 states that the arguments are determined as follows:
- Normal Boot - If the diagnostic-mode? FCode function returns false, the boot device is given by boot-device and the default boot arguments are given by boot-file.
- Diagnostics Boot - If the diagnostic-mode? FCode function returns true, the boot device is given by diag-device and the default boot arguments are given by diag-file.
Platform Implementation Note: boot-device, boot-file, diag-device, and diag-file are potentially multi-entry strings. The boot-command searches the devices specified in boot-device/diag-device in the order defined by the string for the boot-file/diag-file to load into system memory. Failure occurs only if no corresponding file is found/usable on any of the specified devices.
Platforms give the user the ability to control the boot process further with function keys f5 and f6 (within the window described in Requirement ).
R1--1. If, after the boot prompt is displayed, function key f5 is pushed (and auto-boot? = true), then diagnostic-mode? must return true and the default diagnostic device as defined in Requirements and must be used to locate bootable media.
R1--2. If, after the boot prompt is displayed, function key f6 is pushed (and auto-boot? = true), then diagnostic-mode? must return true and diag-device must be used to locate the boot image; if diag-device is empty, then its default as defined in Requirements and must be used to locate bootable media.
R1--3. boot-command must default to boot <with no arguments>.
R1--4. boot-device and diag-device must default to the first devices of each type that would be encountered by a search of the device tree.
R1--5. The search order for the boot-device and diag-device defaults must be floppy, cdrom, tape, disk, network.
R1--6. boot-file must default to <null>.
R1--7. diag-file must default to diag.
Note: Requirement provides a method to invoke stand-alone diagnostics or to start reinstallation without going through the menus. Requirement provides a method to boot with on-line diagnostics.
Software Implementation Note: Pressing either f5 or f6 at the correct time will cause the contents of diag-file to be set into the “bootargs” property of the chosen node of the device tree. The OS can recognize a diagnostics boot request when it finds the “diag” substring in “bootargs”.
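The following minimal sketch illustrates the argument-selection rule for the Normal Boot and Diagnostics Boot cases above when the boot command is issued with no arguments; diagnostic_mode() and get_config_variable() are assumed helper names, not interfaces defined by this architecture.

    /* Hedged sketch: choosing the device list and default arguments for the
     * boot command per the Normal Boot / Diagnostics Boot cases above. */
    const char *device_list, *default_args;

    if (diagnostic_mode()) {
        /* Diagnostics boot (for example, f5 or f6 was pressed). */
        device_list  = get_config_variable("diag-device");
        default_args = get_config_variable("diag-file");
    } else {
        /* Normal boot. */
        device_list  = get_config_variable("boot-device");
        default_args = get_config_variable("boot-file");
    }
    /* Each device in device_list is then tried in order; the boot fails only
     * if no usable image is found on any listed device. */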
CDROM Boot
If the CDROM is the first bootable media found in the devices listed in the bootlist (boot-device strings), the CDROM should boot without having to enter optional file specification information or using the f5 function key normally used for diagnostic boot. This is accomplished by having the appropriate bootinfo.txt file specification in the CDROM entry in the bootlist.
R1--1. CDROM entries for the default OF boot-device and diag-device configuration variables must include the standard block device bootinfo.txt file specification as documented in (\ppc\bootinfo.txt).
Tape Boot
Boot from tape is defined in .
Network Boot
The user selects from a list of network devices on the Multiboot Menu and then selects the boot option. The user may be prompted for network parameters (IP addresses, etc.) which are set as arguments in boot-device by the firmware. If the BOOTP protocol is used, the BOOTREPLY packet contains the network parameters to be used for subsequent transmissions (see for details of this process).
R1--1. If network boot is selected, firmware must provide a means for the user to specify or override network parameters.
Service Processor Boot
In platforms with a service processor, the user may call for a boot using a local/remote connection to the service processor. The particular port used for this remote session is sent to the firmware in a status message after the service processor finishes POST. The port is identified in the “stdin” and “stdout” properties in the chosen node of the OF device tree.
Console Selection
During the boot process, firmware establishes the console to be used for displaying status and menus. A sketch of the console selection process follows the requirement below.
R1--1. If a console has been selected during the boot process, firmware must set the “stdin” and “stdout” properties of the chosen node to the ihandles of this console’s input and output devices prior to passing control to the OS.
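The sketch below is illustrative only and assumes hypothetical helper names (get_config_variable, open_device, open_first_device); the actual selection algorithm is platform-specific.

    /* Hedged sketch of a typical console selection sequence. */
    typedef struct {
        unsigned long stdin_ih;   /* ihandle for "stdin"  */
        unsigned long stdout_ih;  /* ihandle for "stdout" */
    } console_t;

    extern const char *get_config_variable(const char *name);
    extern int open_device(const char *path, unsigned long *ihandle);
    extern int open_first_device(const char *type, unsigned long *ihandle);

    console_t select_console(void)
    {
        console_t con = { 0, 0 };

        /* 1. Honor an explicitly configured console if it opens successfully. */
        if (open_device(get_config_variable("output-device"), &con.stdout_ih) &&
            open_device(get_config_variable("input-device"), &con.stdin_ih))
            return con;

        /* 2. Otherwise prefer a detected graphics display plus keyboard. */
        if (open_first_device("display", &con.stdout_ih) &&
            open_first_device("keyboard", &con.stdin_ih))
            return con;

        /* 3. Fall back to a serial port (for example, the port used for a
         *    service processor session, as described above). */
        open_first_device("serial", &con.stdout_ih);
        con.stdin_ih = con.stdout_ih;
        return con;
    }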
Boot Retry
For boot failures related to firmware trying to access a boot device, it is appropriate for the platform to retry the boot operation, especially in the case of booting from a network device. However, in platforms which have a service processor, there are several other types of detected errors for which a reboot retry may be appropriate; for example, checkstops or loss of communication between firmware and the service processor. To ensure that the user policy is followed, the coordination and counting of retry attempts need to be interlocked between the service processor and boot firmware. The most straightforward way to implement this is to have the boot firmware inform the service processor of all failed boot attempts, and let the service processor initiate the system reset (as it also would for checkstops or hangs). This way the service processor can easily manage the retry count and initiate a service dial-out if the boot retry limit is exceeded.
R1--1. Platform Implementation: In platforms with service processors, retry of failed boot operations must be coordinated between boot firmware and the service processor, to ensure correct counting and handling of reboot retries according to the service processor configuration reboot policies.
Boot Failures
Failure to boot occurs only when no corresponding file is found that is usable on any device specified in the boot-device, boot-file, diag-device, or diag-file string being used.
R1--1. If an error occurs in a boot device preventing boot from that device, and after all defined retries have occurred, the failure must be reported as a POST error.
R1--2. If a boot device is physically missing or lacks a boot record (for example, if a CDROM is not present in a CDROM drive), then a POST error must be generated for this case, must not result in the calling out of a boot device as being defective, and must not result in a hardware service repair action to the device.
R1--3. In Requirement , if it is not possible for a device to distinguish between an actual device error, as opposed to a missing device or boot record, then a POST error must be generated that indicates the possible causes of the failure to boot from the device, and this POST error must not imply that a hardware service repair action is required for the boot device.
Implementation Note: All device errors of the same type may be consolidated into a single POST log entry with multiple location codes listed if needed. This architecture anticipates remote support center notification of hardware errors. It is the intention that only definitive boot device errors will be reported as requiring hardware repair. This is meant to prevent service calls for non-hardware errors such as no tape in a tape drive.
Persistent Memory and Memory Preservation Boot (Storage Preservation Option)
Selected regions of storage, or Logical Memory Blocks (LMBs), may be optionally preserved across client program boot cycles. These LMBs are denoted by the presence of the “ibm,preservable” property in their OF device tree /memory node. The client program registers the LMB with the platform using the ibm,manage-storage-preservation RTAS call if it wants the contents of the storage preserved across client boot cycles (see also "Managing Storage Preservations" in specification). The architectural intent of this facility is to enable client programs to emulate persistent storage. This is done by a client program registering preservable LMBs. Then, after a subsequent boot cycle (perhaps due to error or impending power loss) the presence of the “ibm,preserved-storage” property in the /rtas node of the device tree indicates to the client program that it has preserved memory. When the client program detects that it has booted with preserved storage and that it might be necessary to preserve the storage for the long term, the client program is responsible for copying the preserved data to a long-term persistent storage medium, and then clearing the registration of the preserved LMBs to prevent potential corruption of the persistent storage medium due to subsequent failures. Upon reboot after such an operation, the “ibm,request-partition-shutdown” property is provided in the /rtas node with a value of 2, indicating that the client program should save appropriate data and shut down the partition.
Implementation Note: How areas get chosen to be marked as preservable is beyond the scope of this architecture.
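As a hedged illustration of the client-program flow just described, the sketch below shows one possible use of the storage preservation facility; the rtas_manage_storage_preservation wrapper, operation codes, and property/LMB helpers are assumed names, not interfaces defined by this document, and error handling is omitted.

    /* Hedged sketch: emulating persistent storage with preservable LMBs. */
    extern int  device_tree_has_property(const char *node, const char *prop);
    extern int  lmb_is_preservable(unsigned long lmb_drc_index);
    extern void copy_preserved_lmbs_to_persistent_media(void);
    extern int  rtas_manage_storage_preservation(int op, unsigned long lmb_drc_index);

    #define PRESERVE_REGISTER    1   /* illustrative operation codes */
    #define PRESERVE_UNREGISTER  0

    void handle_storage_preservation(unsigned long lmb_drc_index)
    {
        if (device_tree_has_property("/rtas", "ibm,preserved-storage")) {
            /* A previous boot registered LMBs and their contents survived. */
            copy_preserved_lmbs_to_persistent_media();
            /* Clear the registration so later failures cannot corrupt the copy. */
            rtas_manage_storage_preservation(PRESERVE_UNREGISTER, lmb_drc_index);
        } else if (lmb_is_preservable(lmb_drc_index)) {
            /* Ask the platform to preserve this LMB across client boot cycles. */
            rtas_manage_storage_preservation(PRESERVE_REGISTER, lmb_drc_index);
        }
    }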
Transfer Phase
The image is prepared for execution by checking it against certain configuration variables; this may result in a reboot. Once the OS gains control, it may use the CIS interface to learn about the platform contents and configuration. The OS will generally build its own version of this configuration data and may discard the OF code and device tree in order to reclaim the space used by OF. A set of platform-specific functions is provided by Run-Time Abstraction Services (RTAS), which is instantiated by the OS invoking the instantiate-rtas method of the RTAS OF device tree node.
R1--1. If any device tree property is presented that contains a phandle value to identify a certain node in the device tree, the device tree node so identified must contain the “ibm,phandle” property, and the value of the “ibm,phandle” property must match the phandle value in the property identifying that node.
R1--2. If the “ibm,phandle” property is present in a device tree node, the OS must use this value, and not the phandle value returned by a client interface service, to associate this node with a device tree property that uses a phandle value to identify this node.
R1--3. An OS must not assume that the “ibm,phandle” property, if present, corresponds to the phandle used by or returned by OF client interface services. A phandle value passed to a client interface service as an argument must have been obtained by use of a client interface service, and not from a device tree property value.
Note: If the “ibm,phandle” property exists, there are two “phandle” namespaces which must be kept separate. One is that actually used by the OF client interface; the other is properties in the device tree making reference to device tree nodes. These requirements are written to maintain backward compatibility with older FW versions predating these requirements; if the “ibm,phandle” property is not present, the OS may assume that any device tree properties which refer to this node will have a phandle value matching that returned by client interface services. It will be necessary to have the OSs ready for this requirement before the firmware implementation.
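The sketch below gives one hedged example of how an OS might honor Requirements R1--1 through R1--3 when resolving a property that refers to another node by phandle; the node-iteration and property helpers are assumed names, not part of this architecture.

    /* Hedged sketch: match a phandle found in a device tree property against
     * each node's "ibm,phandle" property, never against the handle returned
     * by the client interface services. */
    #include <stdint.h>
    #include <stddef.h>

    typedef struct node node_t;
    extern node_t *first_node(void);
    extern node_t *next_node(node_t *n);
    extern int get_property_u32(node_t *n, const char *name, uint32_t *value);  /* 0 on success */

    node_t *resolve_phandle_property(uint32_t phandle_from_property)
    {
        for (node_t *n = first_node(); n != NULL; n = next_node(n)) {
            uint32_t ibm_phandle;
            if (get_property_u32(n, "ibm,phandle", &ibm_phandle) == 0 &&
                ibm_phandle == phandle_from_property)
                return n;
        }
        return NULL;  /* no "ibm,phandle" match; see the compatibility note above */
    }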
Run-Time
During run-time, the OS has control of the system and will have RTAS instantiated to provide low-level hardware-specific functions.
Termination
Termination is the phase during which the OS yields control of the system and may return control to the firmware depending on the nature of the terminating condition.
Power Off
If the user activates the system power switch, power may be removed from the hardware immediately (switch directly controls the power supply) or software may be given an opportunity to bring the system down in an orderly manner (power management control of the power switch). If power is removed from the hardware immediately, the OS will lose control of the system in an undetermined state. Any I/O underway will be involuntarily aborted and there is potential for data loss or system damage. A shut-down process prior to power removal is highly recommended. In most power managed systems, power switch activation is fielded as a power management interrupt and the OS (through RTAS) is able to quiesce the system before removing power. The OS may turn off system power using the RTAS power-off function.
Reboot
The OS may cause the system to reset and reboot by calling the RTAS system-reboot function.
Firmware
R1--1. Platforms must implement OF as defined in .
R1--2. The OF User Interface must include the following methods as specified in , Section 7.6: .registers, to, load, go, state-valid, and init-program.
R1--3. Platforms must implement the Run-Time Abstraction Services (RTAS) as described in .
R1--4. OSs must use OF and the RTAS functions to be compatible with all platforms.
OS Installation
Installation of OSs will be accomplished through the Multiboot Menu as follows:
1. The system boots or reboots normally; the user enters the Multiboot Menu by one of the methods described herein.
2. The Multiboot Menu presents a list of all installation devices.
3. The user selects “install” and an installation device from the menu; firmware locates the bootinfo object or install image on the selected installation device.
4. Firmware will execute init-program and, if a bootinfo object was found, firmware parses it, replaces the <boot-script> entities with appropriate values, and executes the script.
5. The OS gets control and selects the target device.
6. After the install process is determined to be successful, the OS updates variables such as boot-device, boot-file, and boot-command. The OS adds the bootinfo-nnnn configuration variable to the NVRAM common system partition.
R1--1. The Multiboot Menu must provide an option for OS installation that lists all possible installation devices.
R1--2. After the install process is determined to be successful, the OS must set boot-device, boot-file, and boot-command.
Tape Install
The OF definition of installation from tape is given in .
Network Install
Network install follows the same process as network boot with the exception that after installation is complete, the OS will write boot-device with the target device information.
R1--1. If network install is selected, firmware must provide a means for the user to override default network parameters.
Diagnostics
IBM Power® system platforms may use IBM AIX® Kernel-based stand-alone diagnostics as their multi-OS common diagnostics package. Since AIX will run on other vendors’ platforms which might not have permission to use AIX diagnostics, the “ibm,aix-diagnostics” property indicates that AIX diagnostics are permitted (see "Root Node Properties" in ).
R1--1. If AIX diagnostics are supported on a platform, then the firmware for that platform must include the property “ibm,aix-diagnostics” in the root node.
Software Implementation Note: Each OS may implement an OS-specific run-time diagnostics package, but should, for purposes of consistency, adhere to the error log formats in .
Platform Class
The “ibm,model-class” OF property is defined to classify platforms for planning, marketing, licensing, and service purposes (see ).
R1--1. The “ibm,model-class” property must be included in the platform’s root node.
Security
Platforms will provide the user with options for a Power On Password (POP) and a Privileged Access Password (PAP) and will have some optional physical security features.
R1--1. Platform Implementation: Platforms must provide a Power On Password (POP) capability which, when enforced, controls the user’s ability to power-on and execute the configured boot sequence.
R1--2. Platform Implementation: Platforms must provide a Privileged Access Password (PAP) capability which, when enforced, controls the user’s ability to alter the boot sequence using f5/f6, and to enter SMS and the Multiboot Menu.
R1--3. Platform Implementation: If the PAP is absent or <NULL>, but the POP is non-<NULL>, then the POP must act as the PAP.
R1--4. Platform Implementation: Platforms must accept the PAP as a valid response to a request to enter the POP.
R1--5. Platform Implementation: If there is a key switch implemented with a secure position, the system must not complete the boot process, regardless of the state of POP and PAP, when the switch is in this position.
R1--6. Platform Implementation: If a key switch is implemented and the switch is in the maintenance (service) position, the POP and PAP must not be enforced.
R1--7. Platform Implementation: Platforms, except for rack mounted systems, must provide a locking mechanism as an option which prevents the removal of the covers.
R1--8. Platform Implementation: Platforms, except for rack mounted systems, must provide a tie-down mechanism as an option which prevents the physical removal of the system from the premises.
R1--9. Platform Implementation: Passwords and keyswitch positions must be implemented in a manner that makes their values accessible to both OF and the service processor.
R1--10. Platform Implementation: The OF configuration variable security-password must be maintained to be equivalent to the Privileged Access Password (PAP).
R1--11. Platform Implementation: If the PAP and security-password are absent or <NULL>, security-mode must be set to “none”; otherwise security-mode must be set to “command”.
R1--12. Platform Implementation: If security-mode is set to any value other than “none” (such as “command” or “full”), it must be treated as security-mode=command.
Platform Implementation Notes: As defined here, the PAP and security-password are stronger than specified in IEEE 1275 for security-mode = command, in that they are required for any command line operations, including go and boot. The PAP and security-password are not required to boot the system with default parameters, however, and in this sense the intent of security-mode = command is achieved. There is currently no implementation of security-mode = full. If a service processor is provided, the requirements relating to passwords are applicable in the service processor environment. Service processor documentation refers to the POP as the General User Password and the PAP as the Privileged User Password.
Endian Support
LoPAR platforms operate with either Big-Endian (BE) or Little-Endian (LE) addressing. In Big-Endian addressing, the address of a word in memory is the address of the most significant byte (the “big” end) of the word; increasing memory addresses approach the least significant byte of the word. In Little-Endian addressing, the address of a word in memory is the address of the least significant byte (the “little” end) of the word. All data structures used for communicating between the OS and the platform (for example, RTAS and hypervisor calls) are in Big-Endian format, unless otherwise designated.
R1--1. Platforms must by default operate with Big-Endian addressing.
R1--2. Platforms that operate with Little-Endian addressing must make System memory appear to be in Little-Endian format to all entities in the system that may observe that image, including I/O.
Platform Implementation Notes: Some hardware (for example, bridges, memory controllers, and processors) may have modal bits to allow those components to be used in platforms which operate in Little-Endian mode. In this case, the hardware or firmware will need to set those bits appropriately. Requirement may have an impact on the processor chosen for the platform.
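The short, hedged example below (not part of the architecture text) simply illustrates the byte-ordering difference described above for a 32-bit word.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Illustration: which byte appears at the lowest address of a 32-bit word. */
    int main(void)
    {
        uint32_t word = 0x11223344;
        uint8_t bytes[4];

        memcpy(bytes, &word, sizeof(word));
        /* Big-Endian addressing:    bytes[0..3] = 11 22 33 44
         * Little-Endian addressing: bytes[0..3] = 44 33 22 11 */
        printf("%02x %02x %02x %02x\n", bytes[0], bytes[1], bytes[2], bytes[3]);
        return 0;
    }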
64-Bit Addressing Support
A 64-bit-addressing-capable platform is defined as one capable of supporting System Memory and Memory Mapped I/O (MMIO) configured above 4 GB (greater than 32 bits of real addressing). This means that all hardware elements in the topology down to the Host Bridges are capable of dealing with a real address range greater than 32 bits, and all Host Bridges are capable of providing a translation mechanism for translating 32-bit I/O bus DMA addresses. All platforms compliant with IBM Power Architecture Platform Requirements (PAPR) version 2.3 and beyond are required to be 64-bit-addressing-capable. A 64-bit-addressing-aware OS is an OS that can deal with a real address space larger than 4 GB. It must handle the 64-bit processor page table format (required of all OSs), and must understand Host Bridge mechanisms and Host Bridge OF methods for supporting System Memory greater than 4 GB. All OSs compliant with IBM Power Architecture Platform Requirements (PAPR) version 2.3 and beyond are required to be 64-bit-addressing-aware.
Minimum System Requirements
This section summarizes the minimum hardware and functionality required for LoPAR compliance. The term portable is used in this document to describe that class of systems that is primarily battery powered and is easily carried by its user. The term personal is used in this document to describe that class of systems that is bound to a specific work area due to its size or power source, and whose use is generally restricted to a single direct user or a small set of users. The term server is used in this document to describe that class of systems that supports a multi-user environment, providing a particular service such as file storage, software repository, or remote processing capability. Each of these classes may have unique requirements due to the way it is used or which OS it generally employs and, for this reason, the requirements in this document may have qualifiers based on the type of system being developed.
R1--1. (Requirement moved to )
R1--2. A means of attaching a diskette drive must be provided (may be through a connector or over a network) and the drive must have the following characteristics:
- Media sense: implementations must allow polling of the drive up to 100 times per second to determine the presence of media in the drive.
- Must accept media of type: 3.5" 1.44 MB MFM.
R1--3. A means of attaching a CD-ROM drive must be provided (may be through a connector or over a network) and the drive must have the following characteristics:
- ISO 9660 compliant
- Supports multi-session
R1--4. When a keyboard is provided, it must be capable of generating at least 101 scan codes.
R1--5. When a mouse is provided, it must have at least two buttons.
R1--6. The capability to generate a tone must be provided on portable and personal platforms, and on server platforms which are not housed in rack enclosures.
R1--7. A Real Time Clock (RTC) must be provided which must have the following characteristics:
- Is non-volatile
- Runs continuously
- Has a resolution of at least one second
Options and Extensions
Options are features that are covered by this architecture, but are not necessarily required to be present on a given platform. Platforms that implement options are required to conform to the definitions in this architecture, so that an aware OS environment can recognize and support them. Some options may be required on some platforms. Refer to for the disposition of currently defined options, including requirements for implementation of some of these options on some platforms. Note that in this table, “optional” does not mean “not required;” see the description column of the table for more information. An extension is a feature that is added to this architecture and is required on all platforms developed after a specified effective date. Options and extensions will normally need to be dormant or invisible in the presence of a non-aware OS environment. In general, this means that they come up passively; that is, they are initialized to an inactive state and activated by an aware OS.
R1--1. Extensions and options must come up passively unless otherwise specified in this architecture.
R1--2. Extensions and options that affect the OS interface to the platform must be identified, when present, through some architected means, such as OF device tree properties.
It is the responsibility of the product development teams to keep the “usage” columns of up to date.

LoPAR Optional Features
Usage Legend: NS = Not Supported; O = Optional (see also Description); OR = Optional but Recommended; R = Required; SD = See Description

Option Name | Base | IBM Server | Description
Symmetrical Multiprocessing (SMP) | O | R | Required on MP platforms.
Multiboot | O | O | Required to support multiple versions of an OS.
PCI Hot Plug DR | O | OR | See for more information.
Logical Resource Dynamic Reconfiguration (LRDR) | O | OR | .
Enhanced I/O Error Handling (EEH) | OR SD | R | See and . Requirements for platforms that implement LPAR, regardless of the number of partitions (Requirements and ).
Error Injection (ERRINJCT) | O | R | Required of servers which implement the EEH option.
Logical Partitioning (LPAR) | O | R | See .
Bridged-I/O EEH Support | O | R | EEH support for I/O structures which contain PCI to PCI bridges or PCI Express switches. See . Required if EEH is supported.
PowerPC External Interrupt | R SD | R SD | May be virtualized; see .
EXTI2C | O | O | See for more information on support of I2C buses.
Firmware Assisted NMI (FWNMI) | R | R | .
System Parameters | R | R | .
Capacity on Demand (CoD) | O | O | .
Predictive Failure Sparing | O | O | .
Converged Location Codes | R | R | The Converged Location Codes option is required on all platforms being developed. See and Requirement .
Shared Processor LPAR (SPLPAR) | O | O | .
Reliable Command/Response Transport | O | O | .
Logical Remote DMA (LRDMA) | O | O | .
Interpartition Logical LAN (ILLAN) | O | O | .
ILLAN Backup Trunk Adapter | O | O | .
ILLAN Checksum Offload Support | O | O | .
Checksum Offload Padded Packet Support | O | O | See .
Virtual SCSI (VSCSI) | O | O | .
Virtual FC (VFC) | O | O | See .
Storage Preservation | O | NS | See and .
Client Vterm | O | R | Required of all platforms that support LPAR, otherwise not implemented. Provides a virtual “Asynchronous” IOA for connecting to a server Vterm IOA, the hypervisor, or HMC (for example, to a virtual console). See for more information.
Server Vterm | O | O | Allows a partition to serve a partner partition's client Vterm IOA.
NUMA Associativity Information | OR | OR | See .
Performance Tool Support | O | NS | Provides access to platform-level facilities for performance tools running in a partition on an LPAR system. See .
MSI (Message Signaled Interrupt) | SD | SD | Required for all platforms that support PCI Express.
ILLAN Buffer Size Control | O | O | See .
Virtual Management Channel (VMC) | O | O | See .
Partition Suspension | O | O | Requires the Logical Partitioning, LRDR, and Update OF Tree options.
Partition Hibernation | O | O | Allows a partition to sleep for an extended period; during this time the partition state is stored on secondary storage for later restoration. Requires the Partition Suspension, ILLAN, and VASI options.
Partition Migration | O | O | Allows the movement of a logical partition from one platform to another; the source and destination platforms cooperate to minimize the time that the partition is non-responsive. Requires the Partition Suspension, ILLAN, and VASI options.
Thread Join | O | O | Allows the multi-threaded caller to efficiently establish a single threaded processing environment.
Update OF Tree | O | O | Allows the caller to determine which device tree nodes changed due to a massive platform reconfiguration as happens during a partition migration or hibernation.
Virtual Asynchronous Services Interface (VASI) | O | O | Allows an authorized virtual server partition (VSP) to safely access the internal state of a specific partition. See for more details. Requires the Reliable Command/Response Transport option.
Virtualized Real Mode Area (VRMA) | O | O | Allows the OS to dynamically relocate, expand, and shrink the Real Mode Area.
TC | O | O | Allows the OS to indicate that there is no need to search secondary page table entry groups to determine that a page table search has failed. See for more details.
Configure Platform Assisted Kernel Dump | O | O | Allows the OS to register and unregister kernel dump information with the platform.
I/O Super Page | O | OR | Allows the OS to specify I/O pages that are greater than 4 KB in length.
Subordinate CRQ (Sub-CRQ) Transport | O | O | Support for the Subordinate CRQs as needed by some Virtual IOAs. See .
Cooperative Memory Over-commitment (CMO) | O | O | The CMO option allows for partition participation in the over-commitment of logical memory by the platform. See .
Partition Energy Management (PEM) | O | O | Allows the OS to cooperate with platform energy management. See .
Multi-TCE-Table (MTT) | O | O | Support for the Multi-TCE-Table option. See .
Virtual Processor Home Node (VPHN) | O | O | Provides substantially consistent virtual processor associativity in a shared processor LPAR environment. See .
IBM Active Memory™ Compression | O | O | Allows the partition to perform active memory compression.
Virtual Network Interface Controller (VNIC) | O | O | See .
Expropriation Subvention Notification | O | O | Allows OS notification of a cooperative memory overcommitment page fault. See .
Boost Modes | O | O | Allows the platform to communicate the availability of performance boost modes along with any ability to manage the same. See .
Platform Resource Reassignment Notification (PRRN) | O | O | .
Dynamic DMA Windows (DDW) | O | O | Allows the creation of DMA windows above 4 GB. See .
Universally Unique Partition Identification Option (UUID) | O | O | See for information on ibm,partition-uuid.
Platform Facilities Option (PFO) | O | O | See , , and for more information.
Extended Cooperative Memory Overcommitment (XCMO) | O | O | Introduces additional cooperative memory overcommitment functions. See .
Memory Usage Instrumentation Option (MUI) | O | O | See .
Block Invalidate Option | O | O | Allows improved performance for removing page table entries representing a naturally aligned block of virtual addresses.
Energy Management Tuning Parameters (EMTP) | O | O | Reports the system Energy Management tuning values.
In-Memory Table Translation Option | O | O | Provides support for the system-wide Memory Management Unit architecture introduced in POWER ISA 3.0.
Hash Page Table Resize Option | O | O | Allows partitions to resize their HPT. See .
Coherent Platform Facility | O | O | See .
IBM LoPAR Platform Implementation Requirements
The tables in this section detail specific product requirements which are not defined as an “option” in this architecture. The intent is to define base requirements for these products, over and beyond what is specified in and elsewhere in this architecture. In addition, any options that are unique to specific implementations (that is, not general usage), and which do not appear in , are listed in this section. It is the responsibility of the product development teams to keep these tables up to date.
IBM Server Requirements
This section describes the requirements for IBM LoPAR Compliant server platforms.
R1--1. For all IBM LoPAR Compliant Platforms: The platform must implement the options marked as “required” in the IBM Server column of and the additional functions as indicated in (that is, the “Base” column of is not sufficient).

IBM Server Required Functions and Features
Function/Feature | Effective Date | Description
All IOA device drivers EEH enabled or EEH safe | 6/2004 | Required even for systems running with just one partition.
It is the responsibility of the product development teams to keep up to date.
Behavior for Optional and Reserved Bits and Bytes
The behavior of the OSs and platforms for bits and bytes in this architecture that are marked as reserved or optional is defined here.
R1--1. Bits and bytes which are marked as “optional” by this architecture and which are not implemented by the platform must be ignored by the platform on a Store and must be returned as 0’s on a Load, including the reserved or optional bits of a partially implemented field.
R1--2. Bits and bytes which are marked as “reserved” by this architecture must be ignored by the platform on a Store and must be returned as 0’s on a Load, except that bits that are marked as “reserved” and which were previously defined by the architecture may be treated appropriately by legacy hardware (such bits in this architecture will state the value that software must use henceforth).
R1--3. Bits and bytes marked as “reserved” must be set to 0 by the OS on a Store, except as otherwise defined by the architecture, and must be ignored on a Load.
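The hedged sketch below illustrates the OS-side convention of Requirement R1--3: reserved bits are written as zeros on a Store and ignored (masked off) on a Load. DEFINED_MASK is an illustrative mask of the architected bits of some hypothetical register field, not a value defined by this architecture.

    #include <stdint.h>

    #define DEFINED_MASK 0x0000FF3Fu   /* illustrative: architected (non-reserved) bits */

    /* Value to use on a Store: reserved bits set to 0. */
    static inline uint32_t reg_store_value(uint32_t desired)
    {
        return desired & DEFINED_MASK;
    }

    /* Value to use after a Load: reserved bits ignored. */
    static inline uint32_t reg_load_value(uint32_t raw)
    {
        return raw & DEFINED_MASK;
    }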