Seminar Report
Transactional Memory
Department of Computer Science
This is to certify that this seminar report entitled 'TRANSACTIONAL MEMORY', submitted by DEEPIKA M P in partial fulfillment of the requirements for the award of Master of Technology in SOFTWARE ENGINEERING of Cochin University of Science and Technology, is a record of the seminar presented by her in the Department of Computer Science during the year 2008-2010.

Seminar Coordinator: Dr. Sumam Mary Idikula
Dr. K. Poulose Jacob, Professor

ACKNOWLEDGEMENT
I express my deep sense of gratitude to Dr. K. Poulose Jacob, Head of the Department of Computer Science, CUSAT, for providing an opportunity to present this seminar. My profound gratitude to Dr. Sumam Mary Idikula for the valuable guidance and timely support she offered. My sincere thanks to all other teaching staff, especially Mr. Santhosh Kumar, and to all non-teaching staff for their generous help and guidance. I also express my gratitude to my batchmates for their kind cooperation and good wishes.

Finally, I thank God Almighty, without whose blessings nothing starts good or ends well.

ABSTRACT
A primary challenge of parallel programming is to find better abstractions for expressing parallel computation and for writing parallel programs. Parallel programming encompasses all of the difficulties of sequential programming, but also introduces the hard problem of coordinating interactions among concurrently executing tasks. Today, most parallel programs employ low-level programming constructs such as threads and explicit synchronization (locks, semaphores, monitors) to coordinate thread execution. Considerable experience has shown that parallel programs written with these constructs are difficult to design, program, debug, and maintain.

Transactional memory (TM) is a programming construct that offers a high-level abstraction for writing parallel programs. TM was proposed by Lomet and first practically implemented by Herlihy and Moss.

It is a concurrency control mechanism, analogous to database transactions, for controlling access to shared memory in concurrent computing. It functions as an alternative to lock-based synchronization and is typically implemented in a lock-free way.

CONTENTS
1. INTRODUCTION
   a) STM
   b) HTM
5. ADVANTAGES
6. DISADVANTAGES
7. CONCLUSION
8. REFERENCES

1. INTRODUCTION
As computers evolve, programming changes as well. The past few years mark the start of a historic transition from sequential to parallel computation in the processors used in most personal, server, and mobile computers. This shift marks the end of a remarkable 30-year period in which advances in semiconductor technology and computer architecture improved the performance of sequential processors at an annual rate of 40%-50%. This steady performance increase benefited all software, and this progress was a key factor driving the spread of software throughout modern life.

This remarkable era ended when practical limits on the power dissipation of a chip halted the continual increases in clock speed, and diminishing instruction-level parallelism reduced the benefit of increasingly complex processor architectures. The era did not stop because Moore's Law ended. Semiconductor technology is still capable of doubling the transistors on a chip every two years. However, this flood of transistors now increases the number of independent processors on a chip, rather than making an individual processor run faster. The resulting computer architecture, named multicore, consists of several independent processors (cores) on a chip that communicate through shared memory. Today, two-core chips are coming to market, and there is every reason to believe that the number of cores will continue to double for a number of generations. On one hand, the good news is that the peak performance of a multicore computer doubles each time the number of cores doubles. On the other hand, achieving this performance requires that a program execute in parallel and scale as the number of processors increases.
Few programs today are written to exploit parallelism effectively. In part, this is because most programmers did not have access to parallel computers, which were limited to domains with large, naturally parallel workloads, such as servers, or huge computations, such as high-performance computing. Because mainstream programming was sequential programming, most existing programming languages, libraries, design patterns, and training do not address the challenges of parallel programming. Obviously, this situation must change before programmers in general will start writing parallel programs for multicore processors.

A primary challenge of parallel programming is to find better abstractions for expressing parallel computation and for writing parallel programs. Parallel programming encompasses all of the difficulties of sequential programming, but also introduces the hard problem of coordinating interactions among concurrently executing tasks. Today, most parallel programs employ low-level programming constructs that are just a thin veneer over the underlying hardware. These constructs consist of threads and explicit synchronization (locks, semaphores, monitors) to coordinate thread execution. Considerable experience has shown that parallel programs written with these constructs are difficult to design, program, debug, and maintain.
Transactional memory, proposed by Lomet and first practically implemented by Herlihy and Moss, is a programming construct that offers a higher-level abstraction for writing parallel programs. In the past few years it has engendered considerable interest, as transactions have long been used in databases to isolate concurrent activities. TM offers a mechanism that allows portions of a program to execute in isolation, without regard to other, concurrently executing tasks. A programmer can reason about the correctness of code within a transaction and need not worry about complex interactions with other, concurrently executing parts of the program. TM offers a promising, but as yet unproven, mechanism to improve parallel programming.

What is transactional memory?

It is a concurrency control mechanism, analogous to database transactions, for controlling access to shared memory in concurrent computing. It functions as an alternative to lock-based synchronization and is typically implemented in a lock-free way.

A database transaction is a unit of work performed against a database management system (or similar system) that is treated in a coherent and reliable way, independent of other transactions. A database transaction, by definition, must be atomic, consistent, isolated, and durable. These properties are usually called the ACID properties.
Atomicity: Either the effects of all or none of its operations remain when a transaction is completed; in other words, to the outside world the transaction appears to be indivisible.

Consistency: Every transaction must leave the database in a consistent state.

Isolation: Transactions cannot interfere with each other. This is the main goal of concurrency control.

Durability: The effects of a successful transaction must persist through crashes.

From these rules it is clear that transactions provide an all-or-nothing proposition: units of work performed in a database must be completed in their entirety or take no effect whatsoever. Further, transactions must be isolated from other transactions, results must conform to existing constraints in the database, and transactions that complete successfully must be committed to durable storage.

In this context, a transaction is a piece of code that executes a series of reads and writes to shared memory. These reads and writes logically occur at a single instant in time; intermediate states are not visible to other (successful) transactions.

TM provides lightweight transactions for threads running in a shared address space. TM ensures the atomicity and isolation of concurrently executing tasks. (In general, TM does not provide consistency or durability.) Atomicity ensures that program state changes effected by code executing in a transaction are indivisible from the perspective of other concurrently executing tasks. In other words, although the code in a transaction may modify individual variables through a series of assignments, another computation can only observe the program state immediately before or after the transaction executes. Isolation ensures that concurrently executing tasks cannot affect the result of a transaction, so a transaction produces the same answer as when no other task was executing. Transactions provide a basis to construct parallel abstractions, which are building blocks that can be combined without knowledge of their internal details, much as procedures and objects provide composable abstractions for sequential code.
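The "single instant in time" behavior can be sketched with a toy buffered-write transaction: writes accumulate in a private buffer and are published to shared memory in one step at commit, so other code can only observe the state before or after the transaction. All class and variable names here are illustrative, not part of any real TM system.

```python
# Toy sketch of atomicity: writes are buffered privately and applied to
# shared memory all at once on commit, or discarded entirely on abort.
shared = {"x": 0, "y": 0}

class Transaction:
    def __init__(self, store):
        self.store = store         # the shared memory
        self.writes = {}           # private write buffer

    def write(self, name, value):
        self.writes[name] = value  # invisible to others until commit

    def read(self, name):
        # read-your-own-writes, else fall through to shared memory
        return self.writes.get(name, self.store[name])

    def commit(self):
        self.store.update(self.writes)  # publish every write at once

    def abort(self):
        self.writes.clear()             # nothing ever touched `shared`

t = Transaction(shared)
t.write("x", 1)
t.write("y", 2)
assert shared == {"x": 0, "y": 0}   # intermediate state not visible
t.commit()
assert shared == {"x": 1, "y": 2}   # both writes appear together
```

An aborted transaction simply discards its buffer, so the all-or-nothing property falls out of the representation.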

A programming model provides a rationale for the design of programming language constructs and guidance on how to construct programs. Like many aspects of TM, its programming model is still the subject of active investigation.

Most TM systems provide simple atomic statements that execute a block of code (and the routines it invokes) as a transaction. An atomic block isolates the code from concurrently executing threads, but a block is not a replacement for general synchronization such as semaphores or condition variables. In particular, atomic blocks by themselves do not provide a means to coordinate code running on parallel threads.

Automatic mutual exclusion (AME), by contrast, turns the transactional model inside out by executing most of a program in transactions. AME supports asynchronous programming, in which a function starts one or more asynchronous computations and later rendezvouses to retrieve their results. This programming model is a common way to deal with unpredictable latency in user-directed and distributed systems. The atomicity provided by transactions ensures that an asynchronous computation, which executes at an unpredictable rate, does not interfere with other, simultaneously active computations.
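The atomic-statement interface can be sketched as follows. This shows only the interface: a single global lock stands in for the TM runtime, which would really run the block as an optimistic transaction. All names are illustrative.

```python
# Sketch of the atomic-statement *interface* only. A global lock stands
# in for the TM runtime; a real system would run the block optimistically.
import threading

_tm_lock = threading.RLock()

class atomic:
    """Context manager playing the role of an `atomic { ... }` block."""
    def __enter__(self):
        _tm_lock.acquire()
    def __exit__(self, exc_type, exc, tb):
        _tm_lock.release()
        return False    # propagate exceptions (a real TM would roll back)

counter = {"n": 0}

def increment():
    with atomic():      # the block executes in isolation
        counter["n"] += 1

threads = [threading.Thread(target=increment) for _ in range(8)]
for th in threads:
    th.start()
for th in threads:
    th.join()
assert counter["n"] == 8
```

As the text notes, such a block provides isolation but no way to make one thread wait for another; that requires the guard and retry constructs discussed later in this report.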

TM can be implemented entirely in software (STM) or with specialized hardware support (HTM). Many different implementation techniques have been proposed, and this report, rather than surveying the literature, focuses on a few key techniques.

Most TM systems of both types implement optimistic concurrency control, in which a transaction executes under the assumption that it will not conflict with another transaction. If two transactions conflict, because one modifies a location read or modified by the other, the TM system aborts one of the transactions by reversing (rolling back) its side effects. The alternative, pessimistic concurrency control, requires a transaction to establish exclusive access to a location (for example, by acquiring a lock) before modifying it. This approach may also abort and roll back a transaction, in case of deadlock.

The initial paper on STM by Shavit and Touitou showed it was possible to implement lock-free, atomic, multi-location operations entirely in software, but it required a program to declare in advance the memory locations to be accessed by a transaction.

Herlihy et al.'s Dynamic STM (DSTM) was the first STM system that did not require a program to declare the memory locations accessed by a transaction. DSTM is an object-granularity, deferred-update STM system, which means that a transaction modifies a private copy of an object and only makes its changes visible to other transactions when it commits. The transaction exclusively accesses the copy without synchronization. However, another transaction can access the original, underlying object while the first transaction is still running, which causes a logical conflict that the STM system detects and resolves by aborting one of the two transactions.
STM systems can detect a conflict when a transaction first accesses an object (early detection) or when the transaction attempts to commit (late detection). Both approaches yield the same results, but they may perform differently and, unfortunately, neither is consistently superior. Early detection prevents a transaction from performing unnecessary computation that a subsequent abort will discard. Late detection can avoid unnecessary aborts, as when the conflicting transaction itself aborts because of a conflict with a third transaction.

Another complication is a conflict between a transaction that only reads an object and another that modifies the object. Since reads are more common than writes, STM systems only clone objects that are modified. To reduce overhead, a transaction tracks the objects it reads and, before it commits, ensures that no other transaction modified them.

DSTM is a library. An object manipulated in a transaction is first registered with the DSTM system, which returns a TMObject wrapper for the object (as illustrated in the accompanying figure). Subsequently, the code executing the transaction can open the TMObject for read-only or read-write access, which returns a pointer to the original or cloned object, respectively. Either way, the transaction manipulates the object directly, without further synchronization.

A transaction ends when the program attempts to commit the transaction's changes. If the transaction successfully commits, the DSTM system atomically replaces, for all modified objects, the old object in a Locator structure with its modified version.
A transaction T can commit successfully if it meets two conditions. The first is that no concurrently executing transaction modified an object read by T. DSTM tracks the objects a transaction opened for reading and validates the entries in this read set when the transaction attempts to commit. An object in the read set is valid if its version is unchanged since transaction T first opened it. DSTM also validates the read set every time it opens an object, to avoid allowing a transaction to continue executing in an erroneous program state in which some objects changed after the transaction started execution.

The second condition is that transaction T is not modifying an object that another transaction is also modifying. DSTM prevents this type of conflict by only allowing one transaction to open an object for modification. When a write-write conflict occurs, DSTM aborts one of the two conflicting transactions and allows the other to proceed. DSTM rolls the aborted transaction back to its initial state and then allows it to re-execute. The policy used to select which transaction to abort can affect system performance, including liveness, but it should have no effect on the semantics of the STM system.
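The two commit conditions can be sketched with version numbers standing in for DSTM's Locator mechanics. This is a single-threaded simplification, and all names are illustrative, not the real DSTM API.

```python
# Toy deferred-update STM in the style of DSTM: each object carries a
# version number; a transaction clones objects it writes, records the
# versions it reads, and at commit (a) validates its read set and
# (b) must have been the only writer of each object it modified.

class TMObject:
    def __init__(self, value):
        self.value = value
        self.version = 0
        self.writer = None          # transaction that opened us for write

class Aborted(Exception):
    pass

class Txn:
    def __init__(self):
        self.read_set = {}          # TMObject -> version seen when opened
        self.write_set = {}         # TMObject -> private clone of value

    def open_read(self, obj):
        self.read_set.setdefault(obj, obj.version)
        return self.write_set.get(obj, obj.value)

    def write(self, obj, value):
        if obj.writer not in (None, self):
            raise Aborted()         # write-write conflict: one txn aborts
        obj.writer = self
        self.read_set.setdefault(obj, obj.version)
        self.write_set[obj] = value # private clone, invisible to others

    def commit(self):
        try:
            # condition 1: nothing we read changed behind our back
            for obj, seen in self.read_set.items():
                if obj.version != seen:
                    raise Aborted()
            # publish the clones and bump the versions
            for obj, value in self.write_set.items():
                obj.value = value
                obj.version += 1
        finally:
            for obj in self.write_set:   # release ownership either way
                obj.writer = None

x = TMObject(10)
t1, t2 = Txn(), Txn()
assert t1.open_read(x) == 10    # t1 opens x for reading at version 0
t2.write(x, 99)
t2.commit()                     # x becomes version 1
try:
    t1.write(x, 11)
    t1.commit()                 # read-set validation fails: x changed
    committed = True
except Aborted:
    committed = False
assert committed is False and x.value == 99
```

The `writer` field plays the role of "open for modification": a second would-be writer aborts immediately, while stale readers are caught by the version check at commit.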

(Figure: A TMObject is a handle for the object. It points to a Locator, which in turn points to the transaction that opened the object, the original (old) version of the object, and the transaction's private (new) clone of the object.)

The performance of DSTM, like that of other STM systems, depends on the details of the workload. In general, the large overheads of STM systems make them more expensive than locking on a small number of processors. However, as the number of processors increases, so does the contention for a lock and the cost of locking. When this occurs and conflicts are rare, STM systems have been shown to outperform locks on small benchmarks.

Other deferred-update STM systems investigated alternative implementation techniques. Harris and Fraser's WSTM system detects conflicts at word, not object, granularity. This approach can avoid unnecessary conflicts if two transactions access different fields in an object, but it complicates the implementation sufficiently that few STM systems adopted the idea (although HTM systems generally detect conflicts at word or cache-line granularity). WSTM was also the first STM system integrated into a programming language. Harris and Fraser extended Java with an atomic statement that executed its block in a transaction, for example:

    atomic {
        int x = Lst.head;
        ...
    }

The construct also provided an optional guard that prevents a transaction from executing until its condition becomes true.

Considerable research has investigated the policies that select which transaction to abort at a conflict. No one policy performs best in all situations, though a policy called Polka performed well overall. Under this policy, each transaction tracks the number of objects it has open and uses this count as its priority. A transaction attempting to acquire access to an object immediately aborts a conflicting, lower-priority transaction. If the acquiring transaction's priority is lower, it backs off N times, where N is the difference in priority, with an exponentially increasing interval between the retries. The transaction aborts and re-executes if it cannot acquire an object within N attempts.
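Polka's arbitration rule can be sketched as a single decision function. This is a simplification: a real contention manager is integrated with the acquire/commit machinery and re-checks the object between backoffs, and the delays here are token values.

```python
import random
import time

# Simplified Polka-style contention management: priority = number of
# objects a transaction has open; a higher-priority acquirer wins
# immediately, a lower-priority one backs off with exponentially
# growing waits before giving up.

def resolve_conflict(acquirer_open, holder_open,
                     base_delay=1e-6, rng=random.random):
    """Return which transaction aborts for one conflict."""
    priority_gap = holder_open - acquirer_open
    if priority_gap < 0:
        return "abort_holder"       # acquirer has higher priority
    for attempt in range(priority_gap):        # back off N times,
        time.sleep(base_delay * (2 ** attempt) * rng())  # growing waits
        # a real system would re-check here whether the holder
        # released the object; this sketch assumes it never does
    return "abort_acquirer"         # gave up after N attempts

# Transaction with 5 objects open meets a holder with only 2:
assert resolve_conflict(acquirer_open=5, holder_open=2) == "abort_holder"
# Lower-priority acquirer backs off, then aborts itself:
assert resolve_conflict(acquirer_open=1, holder_open=3) == "abort_acquirer"
```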
Direct-update systems

In a direct-update system, transactions directly modify an object rather than a copy. Eliminating the copy is potentially more efficient, since it does not require a clone of each modified object. However, a direct-update system must record the original value of each modified memory location, so the system can restore the location if the transaction aborts. In addition, it must prevent a transaction from reading locations modified by other, uncommitted transactions, thereby reducing the potential for concurrent execution. A direct-update system is not completely lock-free: it requires a lock to prevent multiple transactions from updating an object concurrently. Because of the high cost of fair multiple-reader, single-writer locks, these systems do not lock a read-only object and instead rely on read-set validation to detect concurrent modification of read-only objects. These read sets incur the same high overhead cost as in deferred-update systems.
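The undo-log bookkeeping of a direct-update system can be sketched as follows (illustrative names, not any particular system's API): writes go straight to shared memory, and the log remembers only each location's first original value.

```python
# Toy direct-update transaction: writes modify shared memory in place,
# but an undo log records each location's original value so an abort
# can restore the pre-transaction state.
shared = {"a": 1, "b": 2}

class DirectTxn:
    def __init__(self, store):
        self.store = store
        self.undo_log = {}      # location -> original value

    def write(self, name, value):
        # record the original value once, before the first overwrite
        self.undo_log.setdefault(name, self.store[name])
        self.store[name] = value        # modify the object in place

    def abort(self):
        self.store.update(self.undo_log)  # roll back in-place writes
        self.undo_log.clear()

    def commit(self):
        self.undo_log.clear()   # keep the in-place writes

t = DirectTxn(shared)
t.write("a", 100)
t.write("a", 200)
assert shared["a"] == 200   # uncommitted write is already visible here
t.abort()
assert shared == {"a": 1, "b": 2}   # undo log restored original values
```

Note how the uncommitted write was visible in `shared` before the abort; this is exactly why a real direct-update system must also keep other transactions from reading such locations.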

The locks used to prevent multiple writes to a location, however, raise the possibility of stalling many transactions when a transaction is suspended or rescheduled while holding locks. Deferred-update STM systems typically use non-blocking data structures, which prevent a failed thread from obstructing other threads. Direct-update STM systems provide similar forward-progress guarantees to an application by detecting and aborting failed or blocked transactions.

The programming effort necessary to exploit parallelism is justified if the new code performs better or is more responsive than sequential code. Even though the performance of recent STM systems scales with the number of processors in a multicore chip, the overhead of the software systems is significant. Even with compiler optimization, an STM thread may run two to seven times slower than sequential code.

HTM can improve the performance of STM. While still an active area of research, proposed systems fall into two broad categories: those that accelerate key STM operations and those that implement transactional bookkeeping directly in hardware.

5. ADVANTAGES
In addition to its performance benefits, TM greatly simplifies conceptual understanding of multithreaded programs and helps make programs more maintainable by working in harmony with existing high-level abstractions such as objects and modules. Lock-based programs have a number of well-known problems that frequently arise in practice:

• They require thinking about overlapping operations and partial operations in distantly separated and seemingly unrelated sections of code, a task which is very difficult and error-prone for programmers.

• They require programmers to adopt a locking policy to prevent deadlock, livelock, and other failures to make progress. Such policies are often informally enforced and fallible, and when these issues arise they are insidiously difficult to reproduce and debug.

• They can lead to priority inversion, a phenomenon where a high-priority thread is forced to wait on a lower-priority thread holding exclusive access to a resource that it needs.

In contrast, the concept of a memory transaction is much simpler, because each transaction can be viewed in isolation as a single-threaded computation. Deadlock and livelock are either prevented entirely or handled by an external transaction manager; the programmer need hardly worry about them. Priority inversion can still be an issue, but high-priority transactions can abort conflicting lower-priority transactions that have not already committed.

In 2005, Tim Harris, Simon Marlow, Simon Peyton Jones, and Maurice Herlihy described an STM system built on Concurrent Haskell that enables arbitrary atomic operations to be composed into larger atomic operations, a useful concept impossible with lock-based programming. The most fundamental objection is that lock-based programs do not compose: correct fragments may fail when combined. For example, consider a hash table with thread-safe insert and delete operations. Now suppose that we want to delete one item A from table t1 and insert it into table t2, but the intermediate state (in which neither table contains the item) must not be visible to other threads. Unless the implementer of the hash table anticipates this need, there is simply no way to satisfy this requirement. In short, operations that are individually correct cannot be composed into larger correct operations.

With STM, this problem is simple to solve: simply wrapping the two operations in a transaction makes the combined operation atomic. The only sticking point is that it is unclear to the caller, who is unaware of the implementation details of the component methods, when they should attempt to re-execute the transaction if it fails. In response, the authors proposed a retry command which uses the transaction log generated by the failed transaction to determine which memory cells it read, and automatically retries the transaction when one of these cells is modified, based on the logic that the transaction will not behave differently until at least one such value is changed.
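The hash-table move can be sketched with a toy atomically wrapper. A single lock stands in for the STM runtime here; the point is only that the caller composes delete and insert without any help from the table implementations themselves. All names are illustrative.

```python
import threading

# Two tables and a toy transaction wrapper (a global lock standing in
# for the STM runtime, which would run the action optimistically).
t1 = {"A": 1}
t2 = {}
_txn_lock = threading.Lock()

def atomically(action):
    with _txn_lock:
        return action()

def move(item, src, dst):
    # Composing delete + insert: the intermediate state in which
    # neither table holds the item is never visible to other
    # transactions, with no cooperation from the tables themselves.
    def action():
        dst[item] = src.pop(item)
    atomically(action)

move("A", t1, t2)
assert "A" not in t1 and t2["A"] == 1
```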

The authors also proposed a mechanism for composition of alternatives, the orElse keyword. It runs one transaction and, if that transaction does a retry, runs a second one. If both retry, it tries them both again as soon as a relevant change is made. This facility, comparable to features such as the POSIX networking select() call, allows the caller to wait on any one of a number of events simultaneously. It also simplifies programming interfaces, for example by providing a simple mechanism to convert between blocking and non-blocking operations.

6. DISADVANTAGES
Transactions by themselves cannot replace all synchronization in a parallel program. Beyond mutual exclusion, synchronization is often used to coordinate independent tasks, for example, by ensuring that one task waits for another to finish or by limiting the number of threads performing a task.

Transactions by themselves provide little assistance in coordinating independent tasks. For example, consider a producer-consumer relationship, in which one task writes a value that another task reads. Transactions can ensure the tasks' shared accesses do not interfere. However, this pattern is expensive to implement with transactions, whose goal is to shield a task from interactions with other tasks. If the consumer transaction finds the value is not available, it can only abort and check for the value later. Busy waiting by aborting is inefficient, since an aborted transaction rolls back its entire computation. A better solution is for the producer to signal the consumer when the value is ready. However, since a signal is not visible in a transaction, many TM systems provide a guard that prevents a transaction from starting execution until a predicate becomes true.

Haskell TM introduced the retry and orElse constructs as a way for a transaction to wait until an event occurs and to sequence the execution of two transactions. Executing a retry statement causes the surrounding transaction to abort. It does not re-execute until a location it previously read changes value, which avoids the crudest form of busy waiting, in which a transaction repeatedly reads an unchanging value and aborts. The orElse construct composes two transactions by starting the second one only if the first transaction fails to commit. This pattern, which arises in situations as simple as checking for a value in a cache and re-computing it if necessary, is difficult to express otherwise, since a transaction's failure and re-execution are transparent to other computation.
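The retry idea can be sketched for the producer-consumer case with a condition variable modeling "wake when a location the transaction read is written". This is a simplification of Haskell STM's per-read-set wakeup, and all names are illustrative.

```python
import threading

# Sketch of `retry`: a consumer transaction that finds no value aborts
# and sleeps until a shared location changes, instead of spinning.
_cond = threading.Condition()
mailbox = {"value": None}

class Retry(Exception):
    pass

def atomically(txn):
    while True:
        with _cond:
            try:
                return txn()
            except Retry:
                _cond.wait()    # block until some shared write happens

def stm_write(name, value):
    with _cond:
        mailbox[name] = value
        _cond.notify_all()      # wake transactions blocked in retry

def consume():
    def txn():
        if mailbox["value"] is None:
            raise Retry()       # not ready: abort and wait, don't spin
        v = mailbox["value"]
        mailbox["value"] = None
        return v
    return atomically(txn)

results = []
consumer = threading.Thread(target=lambda: results.append(consume()))
consumer.start()
stm_write("value", 42)          # producer signals the waiting consumer
consumer.join()
assert results == [42]
```

An orElse could be layered on top of this by catching Retry from a first transaction function and trying a second one before blocking.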
We still do not understand the trade-offs and programming pragmatics of the TM programming model. For example, the semantics of nested transactions is an area of active debate. Suppose that code in a transaction O invokes a library routine, which starts its own transaction I. Should the transactions interact in any way, and if so, what are the implications for the TM implementation and for programmers building modular software and libraries? Consider: when transaction I commits, should its results be visible only to code in transaction O (closed nesting) or also to other threads (open nesting)? If the latter, what happens if transaction O aborts? Similarly, if transaction I aborts, should it terminate transaction O as well, or should the inner transaction be rolled back and restarted independently?

Finally, the performance of TM is not yet good enough for widespread use. Software TM systems (STM) impose considerable overhead costs on code running in a transaction, which saps the performance advantages of parallel computers. Hardware TM systems (HTM) can lower the overhead, but they are only starting to become commercially available, and most HTM systems fall back on software for large transactions. Better implementation techniques are likely to improve both types of systems and are an area of active research.

7. CONCLUSION
Transactional memory by itself is unlikely to make multicore computers readily programmable. Many other improvements to programming languages, tools, runtime systems, and computer architecture are also necessary. TM, however, does provide a time-tested model for isolating concurrent computations from each other. This model raises the level of abstraction for reasoning about concurrent tasks and helps avoid many insidious parallel programming errors, though many aspects of its semantics and implementation are still the subject of active research. If these difficulties can be resolved in a timely fashion, TM will likely become a central pillar of parallel programming.

8. REFERENCES
1. McDonald, A., Chung, J., Carlstrom, B.D., Minh, C.C., Chafi, H., Kozyrakis, C., and Olukotun, K. Architectural semantics for practical transactional memory. In Proceedings of the 33rd International Symposium on Computer Architecture. ACM, 2006, 53-65.
2. Larus, J. and Kozyrakis, C. Transactional memory. Communications of the ACM, 2008, 80-94.
3. en.wikipedia.org/wiki/Transactional_memory
4. en.wikipedia.org/wiki/Software_transactional_memory
5. en.wikipedia.org/wiki/Concurrency_control