An Operating System for Multicore and Clouds: Mechanisms and Implementation
summer project pal
Active In SP

Posts: 308
Joined: Jan 2011
29-01-2011, 07:59 PM

An Operating System for Multicore and Clouds: Mechanisms and Implementation
A Seminar Report
Smitha Vas P
Department of Computer Science & Engineering
College of Engineering Trivandrum
Kerala - 695016

Cloud computers and multicore processors are two emerging classes of computational
hardware that have the potential to provide unprecedented compute capacity to the average
user. In order for the user to effectively exploit all of this computational power, operating systems (OSes) for these new hardware platforms are needed. Existing multicore operating systems
do not scale to large numbers of cores, and do not support clouds. Consequently, current day
cloud systems push much complexity onto the user, requiring the user to manage individual
Virtual Machines (VMs) and deal with many system-level concerns. fos is a single system image
operating system across both multicore and Infrastructure as a Service (IaaS) cloud systems.
fos tackles OS scalability challenges by factoring the OS into its component system services.
Each system service is further factored into a collection of Internet-inspired servers which com-
municate via messaging. This paper focuses on how to build an operating system which can
service both cloud and multicore computers.

.pdf   An Operating System for Multicore and Clouds Mechanisms and Implementation.pdf (Size: 371.97 KB / Downloads: 58)

Users have progressed from using mainframes to minicomputers to personal computers to
laptops, and most recently, to multicore and cloud computers. In the past, new operating systems
have been written for each new class of computer hardware to facilitate resource allocation,
manage devices, and take advantage of the hardware's increased computational capacity. The
newest classes of computational hardware, multicore and cloud computers, need new operating
systems to take advantage of the increased computational capacity and to simplify users' access
to elastic hardware resources.
Cloud computing and Infrastructure as a Service (IaaS) promise a vision of boundless
computation which can be tailored to exactly meet a user's need, even as that need grows or
shrinks rapidly. Thus, through IaaS systems, users should be able to purchase just the right
amount of computing, memory, I/O, and storage to meet their needs at any given time. Unfortunately, current IaaS systems lack system-wide operating systems, requiring users to explicitly
manage resources and machine boundaries. Making operating systems scale, designing scalable
internal OS data structures, and managing these growing resources will be a tremendous chal-
lenge. Contemporary OSes designed to run on a small number of reliable cores are not equipped
to scale up to thousands of cores or tolerate frequent errors. The challenges of designing an
operating system for future multicore and manycore processors include scalability, managing
elasticity of demand, managing faults, and the challenge of large system programming.
One solution is to provide a single system image OS, making IaaS systems as easy to
use as multiprocessor systems and allowing the above challenges to be addressed in the OS.
A factored operating system (fos) provides a single system image OS on multicore processors
as well as cloud computers. fos does this in two steps. First, fos factors system services of
a full-featured OS by service. Second, fos further factors and parallelizes each system service
into an Internet-style collection, or fleet, of cooperating servers that are distributed among the
underlying cores and machines. All of the system services within fos, along with the fleet of
servers implementing each service, communicate via message passing, which maps transparently
across multicore computer chips and across cloud computers via networking. For efficiency,
when fos runs on shared-memory multicores, the messaging abstraction is implemented using
shared memory.
1.1 Challenges with Current Cloud Systems
Current IaaS systems present a fractured and non-uniform view of resources to the
programmer. IaaS systems such as Amazon's EC2 [3] provision resources in units of virtual
machines (VM). Using virtual machines as a provisioning unit reduces the complexity for the
cloud manager, but without a suitable abstraction layer, this division introduces complexity
for the system user. The user of an IaaS system has to worry not only about constructing their application, but also about system concerns such as configuring and managing communicating operating systems. Addressing the system issues requires a completely different skill set than application development does.
The fractured nature of the current IaaS model extends beyond communication mecha-
nisms to scheduling and load balancing, system administration, I/O devices, and fault tolerance.
For system administration, the user of an IaaS cloud system needs to manage a set of different
computers, manage user accounts within a machine versus externally via NIS or Kerberos,
manage processes between the machines, and keep configuration files and updates synchronized
between machines (cfengine) versus within one machine. Last, faults are accentuated in a VM
environment because the user has to manage cases where a whole VM crashes as a separate
case from a process which has crashed.
Scheduling and load balancing differs substantially within and between machines as well.
Existing operating systems handle scheduling within a machine, but the user must often build
or buy server load balancers for scheduling across machines. Cloud aggregators and middleware
such as RightScale [4] and Amazon's Cloud Watch Auto Scaling [3] provide automatic cloud
management and load balancing tools, but they are typically application-specific and tuned to
web application serving.
1.2 Benefits of a Single System Image
fos proposes to provide a single system image across multicores and the cloud as shown
in Figure 1.
Figure 1: fos provides a single system image across all the cloud nodes
This abstraction can be built on top of VMs which are provided by an IaaS service or directly
on top of a cluster of machines. A single system image has the following advantages over the
ad-hoc approach of managing VMs each running distinct operating system instances:
• Ease of administration: Administration of a single OS is easier than many machines. Specifically, OS update, configuration, and user management are simpler.
• Transparent sharing: Devices can be transparently shared across the cloud. Similarly, memory and disk on one physical machine can transparently be used on another physical machine (e.g., paging across the cloud).
• Informed optimizations: An OS has local, low-level knowledge, thereby allowing it to make better, finer-grained optimizations than middleware systems.
• Consistency: An OS has a consistent, global view of process management and resource allocation. Intrinsic load balancing across the system is possible, and so is easy process migration between machines based on load, which is challenging with middleware systems. A consistent view also enables seamless scaling, since application throughput can be scaled up as easily as executing new processes. Similarly, applications have a consistent communication and programming model whether the application resides inside of one machine or spans multiple physical machines. Furthermore, debugging tools are uniform across the system, which facilitates debugging multi-VM applications.
• Fault tolerance: Due to global knowledge, the OS can take corrective actions on faults.
The fos prototype provides a single system image across multicores and clouds, and includes
a microkernel, messaging layer, naming layer, protected memory management, a local and remote process spawning interface, a file system server, a block device driver server, a message
proxy network server, a basic shell, a webserver, and a network stack.

Cloud computing infrastructure and manycore processors present many common chal-
lenges with respect to the operating system. This section introduces the main problems OS
designers will need to address in the next decade. fos seeks to address these challenges in a
solution that is suitable for both multicore and cloud computing.
2.1 Scalability
The number of transistors which fit onto a single chip microprocessor is exponentially
increasing. In order to turn increasing transistor resources into exponentially increasing per-
formance, microprocessor manufacturers have turned to integrating multiple processors onto
a single die. Current OSes were designed for systems with a single processor or a small number of
processors. The current multicore revolution promises drastic changes in fundamental system
architecture, primarily in the fact that the number of general-purpose schedulable processing
elements is drastically increasing. Therefore multicore OSes need to embrace scalability and
make it a first order design constraint. The scalability limitations of contemporary OS design
include locks, locality aliasing, and reliance on shared memory [2].
Concurrent with the multicore revolution, cloud computing and IaaS systems have been
gaining popularity. The number of computers being added by cloud computing providers has
been growing at a vast rate, driven largely by user demand for hosted computing platforms.
The resources available to a given cloud user are much higher than are available to the non-
cloud user. Cloud resources are virtually unlimited for a given user, only restricted by monetary
constraints. Thus, it is clear that scalability is a major concern for future OSes in both single
machine and cloud systems.
2.2 Variability of Demand
Elasticity of resources can be defined as the aspect of a system where the available
resources can be changed dynamically over time. Manycore systems provide a large number
of general purpose, schedulable cores. Furthermore, the load on a manycore system translates
into the number of cores being used. Thus the system must manage the number of live cores to
match the demand of the user. Therefore, multicore OSes need to manage the number of live
cores which is in contrast to single core OSes which only have to manage whether a single core
is active or idle.
In cloud systems, user demand can grow much larger than in the past. Additionally,
this demand is often not known ahead of time by the cloud user. It is often the case that users
wish to handle peak load without over-provisioning. In contrast to cluster systems where the
number of cores is fixed, cloud computing makes more resources available on-demand than was
ever conceivable in the past.
A major commonality between cloud computing and multicore systems is that the de-
mand is not static. Furthermore, the variability of demand is much higher than in previous
systems and the amount of available resources can be varied over a much broader range in
contrast to single-core or fixed-size cluster systems.
2.3 Faults
Managing software and hardware faults is another common challenge for future multicore
and cloud systems. In multicore systems, hardware faults are becoming more common. As the
hardware industry is continuously decreasing the size of transistors and increasing their count
on a single chip, the chance of faults is rising. With hundreds or thousands of cores per chip,
system software components must gracefully support dying cores and bit flips. In this regard,
fault tolerance in modern OSes designed for multicore is becoming an essential requirement.
In addition, faults in large-scale cloud systems are common. Cloud applications usually
share cloud resources with other users and applications in the cloud. Although each user's
application is encapsulated in a virtual container (for example, a virtual machine in an EC2
model), performance interference from other cloud users and applications can potentially impact
the quality of service provided to the application.
Programming for massive systems is likely to introduce software faults. Due to the
inherent difficulty of writing multithreaded and multiprocess applications, the likelihood of
software faults in those applications is high. Furthermore, the lack of tools to debug and
analyze large software systems makes software faults hard to understand and challenging to fix.
In this respect, dealing with software faults is another common challenge that OS programming
for multicore and cloud systems share.
2.4 Programming Challenges
Contemporary OSes which execute on multiprocessor systems have evolved from unipro-
cessor OSes. This evolution was achieved by adding locks to the OS data structures. There are
many problems with locks, such as choosing correct lock granularity for performance, reasoning
about correctness, and deadlock prevention [2]. Ultimately, programming efficient large-scale
lock-based OS code is difficult and error prone.
Developing cloud applications composed of several components deployed across many
machines is a difficult task. The prime reason for this is that current IaaS cloud systems
impose an extra layer of indirection through the use of virtual machines. Whereas on multiprocessor systems the OS manages resources and scheduling, on cloud systems much of this
complexity is pushed into the application by fragmenting the application's view of the resource
pool. Furthermore, there is not a uniform programming model for communicating within a single multicore machine and between machines. The current programming model requires a cloud
programmer to write a threaded application to use intra-machine resources while socket programming is used to communicate with components of the application executing on different machines.
In addition to the difficulty of programming these large-scale hierarchical systems, managing and load-balancing these systems is proving to be a daunting task as well. Ad-hoc
solutions such as hardware load-balancers have been employed in the past to solve such issues.
These solutions are often limited to a single level of the hierarchy (at the VM level). In the
context of fos, however, this load balancing can be done inside the system, in a generic manner
(i.e. one that works on all messaging instead of only TCP/IP traffic) and on a finer granularity than at the VM or single machine level. Furthermore, with this design, the application
developer need not be aware of such load balancing.
fos is an operating system which takes scalability and adaptability as its first-order
design constraints. fos ventures to develop techniques and paradigms for OS services which
scale from a few to thousands of cores. In order to achieve the goal of scaling over multiple
orders of magnitude in core count, fos uses the following design principles:
• Space multiplexing replaces time multiplexing: Due to the growing bounty of cores, there will soon be a time when the number of cores in the system exceeds the number of active processes. At this point scheduling becomes a layout problem, not a time-multiplexing problem. The operating system will run on distinct cores from the application. This gives spatially partitioned working sets; the OS does not interfere with the application's cache.
• The OS is factored into function-specific services, each implemented as a parallel, distributed service: In fos, services collaborate and communicate only via messages, although applications can use shared memory if it is supported. Services are bound to a core, improving cache locality. Through a library layer, libfos, applications communicate with services via messages.
• The OS adapts resource utilization to changing system needs: The utilization of active services is measured, and highly loaded services are provisioned more cores (or other resources). The OS closely manages how resources are used.
• Faults are detected and handled by the OS: OS services are monitored by a watchdog process. If a service fails, a new instance is spawned to meet demand, and the naming service reassigns communication channels.
The following sections highlight key aspects of the fos architecture, shown in Figure 2.
Figure 2: An overview of the fos server architecture, highlighting the cross-machine interaction
between servers in a manner transparent to the application. In scenario (a), the application
is requesting services from "fos Server a" which happens to be local to the application. In
scenario (b), the application is requesting a service which is located on another machine.
In the figure, fos runs on an IaaS system on top of a hypervisor. A small microkernel
runs on every core, providing messaging between applications and servers. The global name
mapping is maintained by a distributed set of proxy-network servers that also handle inter-
machine messaging. A small portion of this global namespace is cached on-demand by each
microkernel. Applications communicate with services through a library layer (libfos), which
abstracts messaging and interfaces with system services.
3.1 Microkernel
fos is a microkernel operating system. The fos microkernel executes on every core in
the system. fos uses a minimal microkernel OS design where the microkernel only provides a
protected messaging layer, a name cache to accelerate message delivery, basic time multiplexing
of cores, and an Application Programming Interface (API) to allow the modification of address
spaces and thread creation. All other OS functionality and applications execute in user space.
OS system services execute as userland processes, but may possess capabilities to communicate
with other system services which user processes do not.
Capabilities are extensively used to restrict access into the protected microkernel. The
memory modification API is designed to allow a process on one core to modify the memory
and address space on another core if appropriate capabilities are held. This approach allows
fos to move significant memory management and scheduling logic into userland.
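The capability-gated memory API can be illustrated with a small sketch. All names here (Microkernel, grant, modify_address_space) are invented for illustration and are not fos's real interface; the point is only that the kernel checks a capability, while the policy of who gets capabilities lives in userland servers.

```python
# Illustrative sketch only: class and method names are invented, not fos's API.
class Microkernel:
    def __init__(self):
        self.grants = set()          # (holder, target) pairs allowed to modify
        self.address_spaces = {}     # target core id -> {vaddr: frame} mapping

    def grant(self, holder, target):
        # Record a capability letting `holder` edit `target`'s address space.
        self.grants.add((holder, target))

    def modify_address_space(self, holder, target, vaddr, frame):
        # The microkernel only enforces the capability check; management
        # policy is left to userland services.
        if (holder, target) not in self.grants:
            raise PermissionError("capability not held")
        self.address_spaces.setdefault(target, {})[vaddr] = frame
```

In this model a userland memory-management server, once granted the capability, can install mappings for another core, while an arbitrary application cannot.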
3.2 Messaging
fos provides a simple process-to-process messaging API for inter-process communication
and synchronization. There are several key advantages to using messaging for this mechanism.
One advantage is the fact that messaging can be implemented on top of shared memory, or
provided by hardware, thus allowing this mechanism to be used for a variety of architectures.
Another advantage is that the sharing of data becomes much more explicit in the programming
model, thus allowing the programmer to think more carefully about the amount of shared
data between communicating processes. By reducing this communication, better encapsulation
as well as scalability is achieved, both of which are desirable traits for a scalable cloud or multicore
operating system.
fos allows conventional multithreaded applications with shared memory. This is in order
to support legacy code as well as a variety of programming models. However, operating system
services are implemented strictly using messages. This is done to force careful thought about
which data are shared to improve scalability.
Using messaging is also beneficial in that the abstraction works across several different
layers without concern from the application developer. When one process wishes to commu-
nicate with another process it uses the same mechanism for this communication regardless of
whether the second process is on the same machine or not. Existing solutions typically use a
hierarchical organization where intra-machine communication uses one mechanism while inter-
machine communication uses another, often forcing the application developer to choose a priori
how they will organize their application around this hierarchy. By abstracting this commu-
nication mechanism, fos applications can simply focus on the application and communication
patterns on a flat communication medium, allowing the operating system to decide whether or
not the two processes should live on the same VM or not. Additionally, existing software sys-
tems which rely on shared memory are also relying on the consistency model and performance
provided by the underlying hardware.
fos messaging works intra-machine and across the cloud, but uses differing transport
mechanisms to provide the same interface. On a shared memory multicore processor, fos uses
message passing over shared memory. When messages are sent across the cloud, messages are
sent via shared memory to the local proxy server which then uses the network (e.g., Ethernet)
to communicate with a remote proxy server which then delivers the message via shared memory
on the remote node. Each process has a number of mailboxes that other processes may deliver
messages to provided they have the credentials. fos presents an API that allows the application
to manipulate these mailboxes and their properties. An application starts by creating a mailbox.
Once the mailbox has been created, capabilities are created which consist of keys that may be
given to other servers allowing them to write to the mailbox. In addition to mailbox creation
and access control, processes within fos are also able to register a mailbox under a given
name. Other processes can then communicate with this process by sending a message to that
name and providing the proper capability. The fos microkernel and proxy server assume the
responsibility of routing and delivering messages regardless of whether or not a message crosses
machine boundaries.
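The mailbox flow described above (create a mailbox, mint capability keys, register a name, deliver only to key holders) can be sketched as follows. This is a hedged illustration: the class names and message shapes are invented, not the real fos messaging API.

```python
import secrets

# Sketch of the mailbox/capability flow; names are illustrative, not fos's API.
class Mailbox:
    def __init__(self):
        self.queue = []      # delivered messages, oldest first
        self.keys = set()    # capability keys that permit delivery

    def make_capability(self):
        # Mint a fresh key that the owner can hand to another process.
        key = secrets.token_hex(8)
        self.keys.add(key)
        return key

class NameRegistry:
    def __init__(self):
        self.names = {}      # registered name -> mailbox

    def register(self, name, mailbox):
        self.names[name] = mailbox

    def send(self, name, key, message):
        # Delivery is refused unless the sender presents a valid capability.
        mbox = self.names[name]
        if key not in mbox.keys:
            raise PermissionError("invalid capability for mailbox")
        mbox.queue.append(message)
```

A server would register its mailbox under a name such as "/sys/fs" and distribute keys to clients; the microkernel and proxy servers then route by name, whether or not the destination is on the same machine.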
3.3 Naming
One unique approach to the organization of multiple communicating processes that fos
takes is the use of a naming and lookup scheme. Processes are able to register a particular name
for a mailbox. This namespace is a hierarchical URI much like a web address or filename. This
abstraction provides great flexibility in load balancing and locality to the operating system.
The basic organization for many of fos's servers is to divide the service into several
independent processes (running on different cores) all capable of handling the given request.
As a result, when an application messages a particular service, the nameserver will provide a
member of the fleet that is best suited for handling the request. To accomplish this, all of the
servers within the fleet register under a given name. When a message is sent, the nameserver
will provide the server that is optimal based on the load of all of the servers as well as the
latency between the requesting process and each server within the fleet.
When multiple servers want to provide the same service, they can share a name. One
solution to route the message to the correct server is to have a few fixed policies such as round
robin or closest server. Alternatively, custom policies could be set via a callback mechanism or
complex load balancer. Meta-data such as message queue lengths can be used to determine the
best server to send a message to. The advantage of this design is that much of the complexity of
dealing with separate forms of inter-process communication in traditional cloud solutions is
abstracted behind the naming and messaging API. Each process simply needs to know the
name of the other processes it wishes to communicate with; fos assumes the responsibility of
efficiently delivering the message to the best suited server within the fleet providing the given service.
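Two of the fixed policies mentioned above can be sketched in a few lines. This is an illustration only: the fleet is modelled as a plain list of member names, and the latency function is a stand-in for whatever metadata the nameserver actually consults.

```python
import itertools

# Sketch of two fixed name-resolution policies; representations are invented.
def round_robin(fleet):
    # Returns a chooser that cycles through fleet members in order.
    cycle = itertools.cycle(fleet)
    return lambda: next(cycle)

def closest(fleet, latency):
    # Pick the member with the lowest latency to the requesting process.
    return min(fleet, key=latency)
```

A callback-based custom policy would simply replace these functions with one supplied by the service, consulting metadata such as per-server queue lengths.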
3.4 OS Services
A primary challenge in both cloud computing and multicore is the unprecedented scale of
demand on resources, as well as the extreme variability in the demand. System services must be
both scalable and elastic, or dynamically adaptable to changing demand. This requires resources
to shift between different system services as load changes. fos addresses these challenges by
parallelizing each system service into a fleet of spatially-distributed, cooperating servers. Each
service is implemented as a set of processes that, in aggregate, provide a particular service.
Fleet members can execute on separate machines as well as separate cores within a machine.
This improves scalability as more processes are available for a given service and improves
performance by exploiting locality. Fleets communicate internally via messages to coordinate
state and balance load. There are multiple fleets active in the system: e.g., a file system fleet,
a naming fleet, a scheduling fleet, a paging fleet, a process management fleet, etc.
When demand for a service outstrips its capabilities, new members of the fleet are added
to meet demand. This is done by starting a new process and having it handshake with existing
members of the fleet. In some cases, clients assigned to a particular server may be reassigned
when a new server joins a fleet. This can reduce communication overheads or lower demand
on local resources (e.g., disk or memory bandwidth). Similarly, when demand is low, processes
can be eliminated from the fleet and resources returned to the system. This can be triggered
by the fleet itself or an external watchdog service that manages the size of the fleet.
fos provides (i) a cooperative multithreaded programming model; (ii) easy-to-use remote
procedure call (RPC) and serialization facilities; and (iii) data structures for common patterns
of data sharing.
3.4.1 fos Server Model
fos provides a server model with cooperative multithreading and RPC semantics. The
goal of the model is to abstract calls to independent, parallel servers to make them appear as
local libraries, and to mitigate the complexities of parallel programming. The model provides
two important conveniences: the server programmer can write simple straight-line code to
handle messages, and the interface to the server is simple function calls.
Servers are event-driven programs, where the events are messages. Messages arrive on
one of three inbound mailboxes: the external (public) mailbox, the internal (fleet) mailbox,
and the response mailbox for pending requests. To avoid deadlock, messages are serviced in
reverse priority of the above list.
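The servicing order (responses first, then internal fleet messages, then new external requests) can be sketched as a small selection function. The mailboxes are modelled here as plain lists; this is illustrative, not the actual fos dispatch code.

```python
# Sketch of the deadlock-avoiding service order: response messages are
# drained before internal messages, which are drained before new requests.
def next_message(external, internal, response):
    for mbox in (response, internal, external):   # reverse priority of the list
        if mbox:
            return mbox.pop(0)                    # oldest message first
    return None                                   # nothing pending
```

Draining responses first ensures a server never refuses the reply that would unblock one of its own pending requests while it accepts new work.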
New requests arrive on the external mailbox. The thread that receives the message
is now associated with the request and will not execute any other code. The request may
require communication with other servers (fleet members or other services) to be completed.
Meanwhile, the server must continue to service pending requests or new requests. The request
is processed until completion or an RPC to another service occurs. In the former case, the thread
terminates. In the latter, the thread yields to the cooperative scheduler, which spawns a new
thread to wait for new messages to arrive.
Requests internal to the fleet arrive on the internal mailbox. These deal with maintaining data consistency within the fleet, load balancing, or growing/shrinking of the fleet. Otherwise, they are handled identically to requests on the external mailbox. They are kept separate
to prevent others from spoofing internal messages and compromising the internal state of the fleet.
Requests on the response mailbox deal with pending requests. Upon the receipt of such
a message, the thread that initiated the associated request is resumed.
The interface to the server is a simple function call. The desired interface is specified
by the programmer in a header file, and code is generated to serialize these parameters into
a message to the server. Likewise, on the receiving end, code is generated to deserialize the
parameters and pass them to the implementation of the routine that runs in the server. On the
caller side, the thread that initiates the call yields to the cooperative scheduler. When a response
arrives from the server, the cooperative scheduler will resume the thread. The cooperative
scheduler runs whenever a thread yields. If there are threads ready to run (e.g., from locking),
then they are scheduled. If no thread is ready, then a new thread is spawned that waits on
messages. If threads are sleeping for too long, then they are resumed with a timeout error.
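The yield-on-RPC/resume-on-response cycle above can be sketched with a Python generator standing in for a cooperative thread. The server name "math_server" and the message tuple shape are invented; the point is that the handler is straight-line code whose blocking point is the yield, with the scheduler resuming it when the response message arrives.

```python
# Sketch of the cooperative RPC cycle; names and message shapes are invented.
def handler(request):
    # Straight-line handler code; the yield is where the thread blocks on an
    # RPC and control returns to the cooperative scheduler.
    doubled = yield ("rpc", "math_server", request)
    return doubled + 1

def run(gen_fn, request, rpc_table):
    thread = gen_fn(request)
    _, server, arg = next(thread)            # run until the thread blocks on RPC
    try:
        thread.send(rpc_table[server](arg))  # response arrives; resume thread
    except StopIteration as done:
        return done.value                    # handler ran to completion
```

In a real server the scheduler would interleave many such threads, spawning a fresh one to wait on the mailboxes whenever all existing threads are blocked.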
3.4.2 Parallel Data Structures
One key aspect to parallelizing operating system services is managing the state associated
with a particular service amongst the members of the fleet. The idea is to provide a common container interface, which abstracts several implementations that provide different consistency, replication, and performance properties. In this solution, the operating system and support
libraries provide an interface for storing and retrieving data. On the back-end, each particular
server stores some of the data (acting as a cache) and communicates with other members of the
fleet to access state information not homed locally. Special care needs to be taken to handle
joining and removing servers from a fleet. By using a library provided by the operating system
and support libraries, the code to manage this distributed state can be tested and optimized,
freeing the application developer from concerning themselves with the implementation of
distributed data structures.
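The common container interface can be sketched as a dictionary whose entries are homed on different fleet members by key hash. This is a minimal illustration: in fos each shard would live in a separate server process reached via messaging, and the consistency, replication, and membership-change machinery discussed above is deliberately omitted.

```python
# Sketch of a fleet-sharded container; a real implementation would place
# each shard in a separate server process and handle membership changes.
class FleetDict:
    def __init__(self, n_members):
        self.shards = [dict() for _ in range(n_members)]

    def _home(self, key):
        # Deterministic homing: every fleet member agrees where a key lives.
        return hash(key) % len(self.shards)

    def __setitem__(self, key, value):
        self.shards[self._home(key)][key] = value

    def __getitem__(self, key):
        return self.shards[self._home(key)][key]
```

Callers see an ordinary dictionary; only the library knows that a lookup may require messaging the fleet member that homes the key.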
The key components of fos are illustrated here. This section also describes how fos works and how fos
solves key challenges in the cloud.
4.1 File System
Figure 3: Anatomy of a File System Access
An example of the interaction between the different servers in fos is the fos file server.
Figure 3 depicts the anatomy of a file system access in fos. In this figure, the application client,
the fos file system server and the block device driver server are all executing on distinct cores to
diminish the cache and performance interference among themselves. Since the communication
between the application client and system servers, and amongst the system servers, is via
the fos messaging infrastructure, proper authentication and credential verification for each
operation is performed by the messaging layer in the microkernel. All services are assumed to
be on the same machine; however, the multi-machine case is a logical extension to this example,
with a proxy server bridging the messaging abstraction between the two machines.
fos intercepts the POSIX file system calls in order to support compatibility with legacy
POSIX applications. It bundles the POSIX calls into a message and sends it to the file system
server. The microkernel determines the destination server of the message and verifies that the
client application possesses the requisite capabilities to communicate with the server. It then
looks up the destination server in its name cache and determines which core it is executing
on. If the server is a local server (i.e. executing on the same machine as the application), the
microkernel forwards the message to the destination application.
In Figure 3, fos intercepts the application's file system access in step 1. In step 2, the system call is
bundled in a message to be sent via the messaging layer. In step 3, since the destination server
for this message is the file system server, fos queries the name cache and sends the message to
the destination core.
Once the file system server receives a new message in its incoming mailbox queue, it services
the request. If the data requested by the application is cached, the server bundles it into
a message and sends it back to the requesting application. Otherwise, it fetches the needed
sectors from disk through the block device driver server.
In the file system anatomy figure, step 5 represents the bundling of the sector requests into
block messages while step 6 represents the look-up of the block device driver in the name cache.
Once the server is located, the fos microkernel places the message in the incoming mailbox
queue of the block device driver server as shown in step 6.
The block device driver server provides disk I/O operations and access to the physical
disk. In response to the incoming message, the block device driver server processes the request
enclosed in the incoming message and fetches the sectors from disk, as portrayed in steps 7, 8
and 9 in the figure. Afterward, it encapsulates the fetched sectors in a message and
sends it back to the file system server, as shown in steps 10, 11 and 12. In turn, the file system
server processes the acquired sectors from its incoming mailbox queue, encapsulates the required
data into messages and sends them back to the client application. In the client application, libfos
receives the data at its incoming mailbox queue and processes it in order to provide the file
system access requested by the client application. These are steps 13 through 15 in the file
system access anatomy in Figure 3.
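The cache-hit/cache-miss behavior of the file system server (steps 4 through 15) can be modeled as a small service loop. Everything here is an illustrative assumption: the `block_disk` dictionary stands in for the physical disk, `fs_cache` for the server's cache, and the functions for the message exchanges between servers.

```python
from collections import deque

block_disk = {7: b"sector-7-data"}   # the block driver's view of the disk
fs_cache = {}                        # the file system server's sector cache

def block_driver(sector):
    # Steps 7-9: the block device driver server fetches the sector from disk.
    return block_disk[sector]

def fs_server(mailbox):
    replies = []
    while mailbox:
        sector = mailbox.popleft()
        if sector not in fs_cache:
            # Cache miss, steps 5-12: request the sector from the block
            # device driver server and cache the result.
            fs_cache[sector] = block_driver(sector)
        # Steps 13-15: bundle the data into a reply for the client.
        replies.append(fs_cache[sector])
    return replies

out = fs_server(deque([7, 7]))   # the second request is served from cache
```

The point of the sketch is the structure of the loop: every request is serviced from the incoming mailbox queue, and only misses cross over to the block device driver server.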
Libfos provides several functions, including compatibility with POSIX interfaces. The
user application can send file system requests either directly through the fos messaging layer
or through libfos. In addition, if the file system server is not running on the local machine
(i.e. the name cache could not locate it), the message is forwarded to the proxy server. The
proxy server has the name cache and location of all the remote servers. In turn, it determines
the appropriate destination machine for the message, bundles it into a network message and
sends it via the network stack to the designated machine.
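The routing decision just described can be condensed into a toy dispatcher. The server names and IP address are invented for illustration; in fos the local name cache and the proxy server's remote mappings are real OS services, not dictionaries.

```python
LOCAL_NAMES = {"fs_server"}                   # local name cache contents
REMOTE_NAMES = {"fs_server_b": "10.0.0.7"}    # proxy server's remote view

def route(dest):
    if dest in LOCAL_NAMES:
        # The name cache resolved the server: deliver on this machine.
        return ("local", dest)
    # Otherwise hand the message to the proxy server, which knows the
    # destination machine and sends it via the network stack.
    return ("proxy", REMOTE_NAMES[dest])
```

A lookup miss in the local cache is what triggers the proxy path; the client never needs to know whether its file system server happens to be local or remote.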
4.2 Spawning Servers
To expand a fleet by adding a new server, one must first spawn the new server process. As
shown in Figure 4, spawning a new process takes into account the machine on which the process
should be spawned. Spawning begins with a call to the spawnProcess() function; this arises
either through an intercepted exec syscall from the POSIX compatibility layer, or through a
fos-aware application calling spawnProcess directly. By calling spawnProcess directly, parent
processes can exercise greater control over where their children are placed by specifying
constraints on which machine to run on, what kinds of resources the child will need, and
locality hints to the scheduler.
The spawnProcess function bundles the spawn arguments into a message and sends
that message to the spawn server's incoming request mailbox. The spawn server must first
determine which machine is most suitable for hosting that process. It makes this decision by
considering the load and available resources of the running machines, as well as the constraints
given by the parent process in the spawnProcess call. The spawn server interacts with the
scheduler to determine the best machine and core for the new process to start on. If the best
machine for the process is the local machine, the spawn server sets up the address space for the
new process and starts it. The spawn server then returns the PID to the process that called
spawnProcess by responding with a message. If the scheduler determined that spawning on a
remote machine is best, the spawn server forwards the spawn request to the spawn server on
the remote machine, which then spawns the process.
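The placement decision can be sketched as follows. The machine records, the `mem` constraint key, and the returned action strings are all hypothetical; the real spawn server consults the scheduler and exchanges messages rather than returning strings.

```python
def choose_machine(machines, constraints):
    # Keep only machines satisfying the parent's constraints, then pick
    # the least-loaded one. An empty candidate list means no machine fits.
    candidates = [m for m in machines
                  if m["free_mem"] >= constraints.get("mem", 0)]
    if not candidates:
        return None
    return min(candidates, key=lambda m: m["load"])

def spawn_process(machines, local, constraints):
    best = choose_machine(machines, constraints)
    if best is None:
        return "spawn_new_vm"            # no suitable machine: grow the cloud
    if best["name"] == local:
        return "spawn_local"             # set up address space, return PID
    return f"forward_to:{best['name']}"  # remote spawn server does the work

machines = [{"name": "vm-a", "load": 0.9, "free_mem": 4},
            {"name": "vm-b", "load": 0.2, "free_mem": 4}]
```

The `None` branch is the hook into the next subsection: when no running machine can host the process, the spawn server escalates to the cloud interface server to create a new VM.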
Figure 4: Spawning a VM
If the local spawn server is unable to locate a suitable machine to spawn the process,
it initiates the procedure of spawning a new VM. To do this, it sends a message to the
cloud interface server, describing what resources the new machine should have; when the cloud
interface server receives this message, it picks the best type of VM to ask for. The cloud interface
server then spawns the new VM by sending a request via Internet requests to the cloud manager
(the server outside of fos which is integrated into the underlying cloud infrastructure, e.g. EC2).
When the cloud manager returns the VM ID, the cloud interface server waits until the new VM
acquires an IP address. At this point, the cloud interface server begins integrating the new
VM into the fos single system image.
The newly booted VM starts in a bare state, waiting for the spawner VM to contact
it. The cloud interface server notifies the local proxy server that there is a new VM at the
given IP address that should be integrated into the system, and the proxy server then connects
to the remote proxy server at that IP and initiates the proxy bootstrap process. During the
bootstrap process, the proxy servers exchange current name mappings and notify the rest of
the machines that a new machine is joining the system. When the local proxy server
finishes this setup, it responds to the cloud interface server that the VM is fully integrated.
The cloud interface server can then inform the local spawn server that a new machine is
available to spawn new jobs; the spawn server in turn tells all the spawn servers in the fleet
that there is a new spawn server and a new machine available. The local spawn server
finally forwards the original spawn call to the remote spawn server on the new VM.
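The core of the proxy bootstrap is the name-mapping exchange, which can be modeled as a symmetric merge. The mapping contents and addresses below are invented; the real exchange happens over the network between the two proxy servers.

```python
def proxy_bootstrap(local_map, remote_map):
    # Each proxy merges the other's name mappings into its own view, so
    # afterwards either side can route messages to servers on the other
    # machine.
    merged = {**local_map, **remote_map}
    return dict(merged), dict(merged)

local_view = {"fs_server": "10.0.0.1"}       # servers on the spawner VM
remote_view = {"spawn_server": "10.0.0.7"}   # servers on the new VM
local_after, remote_after = proxy_bootstrap(local_view, remote_view)
```

After the merge both proxies hold the union of the mappings, which is what makes the new VM part of the single system image from the perspective of message routing.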
In order to smooth the process of creating new VMs, the spawning service uses a pair of
high- and low-water-marks, instead of spawning only when necessary. This allows the spawning
service to mask VM startup time by preemptively spawning a new VM when the resources are
low but not completely depleted. It also prevents the ping-ponging effect, where new VMs are
spawned and destroyed unnecessarily when the load is near the new-VM threshold, and gives
the spawn servers more time to communicate with each other and decide whether a new VM
needs to be spawned.
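The water-mark policy is a classic hysteresis loop, sketched here with invented threshold values. Spawning is triggered below the low mark while resources still remain (masking VM startup latency), and retirement is only considered above the high mark, so load hovering near a single threshold cannot ping-pong VMs.

```python
LOW, HIGH = 2, 6   # illustrative water marks, in free-resource units

def decide(free_resources, pending_vm):
    if free_resources < LOW and not pending_vm:
        # Preemptive spawn: resources are low but not yet depleted.
        return "spawn_vm"
    if free_resources > HIGH:
        # Plenty of headroom: a VM may be retired.
        return "may_retire_vm"
    # Between the marks (or a spawn already in flight): do nothing.
    return "steady"
```

The gap between `LOW` and `HIGH` is what buys the spawn servers time to coordinate before committing to a scaling action.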
4.3 Elastic Fleet
As key aspects of the design of fos include scalability and adaptability, a fleet grows to
match demand as described here. If, while the system is running, the load changes, then the
system should respond in a way that meets that demand if at all possible. In the context of
a fos fleet, if the load becomes too high for the fleet to handle requests at the desired rate,
then a watchdog process for the fleet can grow the fleet. The watchdog does this by spawning
a new member of the fleet and initiating the handshaking process that allows the new server
to join the fleet. During the handshaking process, existing members of the fleet are notified of
the new member, and state is shared with the new fleet member. Additionally, the scheduler
may choose to spatially reorganize the fleet so as to reduce the latency between fleet members
and the processes that the fleet is servicing.
If there are many servers on a single machine that are all requesting service look-ups
from the nameserver, the watchdog process may notice that all of the queues are becoming
full on each of the nameservers. It may then decide to spawn a new nameserver and allow
the scheduler to determine which core to put this nameserver on so as to alleviate the higher
load. By using the programming model provided for OS services as well as the parallel data
structures for backing state, many servers can easily enjoy the benefit of being dynamically
scalable to match demand.
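The nameserver example can be condensed into a toy watchdog. The queue-depth representation, the fullness threshold, and the naming scheme are all assumptions for the sketch; in fos the watchdog observes real mailbox queues and the scheduler picks the new member's core.

```python
FULL_FRACTION = 0.8   # illustrative "queue is becoming full" threshold

def watchdog(fleet, capacity):
    # fleet maps each fleet member's name to its current queue depth.
    if all(depth >= FULL_FRACTION * capacity for depth in fleet.values()):
        # Every member is near capacity: grow the fleet. The handshake
        # is modeled as the new member joining with an empty queue.
        new = f"nameserver{len(fleet)}"
        fleet[new] = 0
        return new          # the scheduler would now place it on a core
    return None             # at least one member still has headroom

fleet = {"nameserver0": 9, "nameserver1": 10}
added = watchdog(fleet, 10)
```

Requiring *all* members to be near capacity before growing is one possible policy; as the text notes, the actual policy is service-specific and supplied by the fleet itself.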
While the mechanism for growing the fleet is generic, several aspects of this particular
procedure are service-specific. One issue that arises is obtaining the meta-data required to
make this decision and choosing the policy over that meta-data to define the decision boundary.
To solve this issue, the actual policy can be provided by members of the fleet.
The fact that this decision is made by part of the operating system is a unique and
advantageous difference fos has over existing solutions. In particular, fleet expansion (and
shrinking) can be a global decision based on the health and resources available in a global sense,
taking into consideration the existing servers, their load and location (latency), as well as the
desired throughput or monetary concerns of the system owner. By taking all of this information
into consideration when making the scaling decision, fos can make a much more informed
decision than solutions that simply look at the cloud application at the granularity of VMs.

5 Conclusion
Cloud computing and multicores have created new classes of platforms for application
development; however, they come with many challenges as well. New issues arise from the
fractured resource pools in clouds. New OSes also need to deal with a dynamic underlying
computing infrastructure due to varying application demand, faults, or energy constraints. fos
seeks to surmount these issues by presenting a single system interface to the user and by
providing a programming model that allows OS system services to scale with demand. By
placing key mechanisms for multicore and cloud management in a unified operating system,
resource management and optimization can occur with a global view and at the granularity
of processes instead of VMs. fos is scalable and adaptive, thereby allowing the application
developer to focus on application-level problem solving without distractions from the underlying
system infrastructure.

References
[1] David Wentzlaff, Charles Gruenwald III, Nathan Beckmann and A. Agarwal. "An Operating
System for Multicore and Clouds: Mechanisms and Implementation". SoCC '10: Proceedings
of the 1st ACM Symposium on Cloud Computing, 2010.
[2] D. Wentzlaff and A. Agarwal. "Factored operating systems (fos): the case for a scalable
operating system for multicores". SIGOPS Oper. Syst. Rev., 43(2):76-85, 2009.
[3] Amazon Elastic Compute Cloud (Amazon EC2), 2009. aws.amazon.com/ec2/
[4] RightScale home page. rightscale.com

