18-04-2010, 08:38 PM

.doc   E-MAIL SERVICE SYSTEM IN RHEL.doc (Size: 570 KB / Downloads: 173)

Presented By:
As in many MNCs, Microsoft Office Outlook is used to send email, and mail must be routed between domains, whether the sender and recipient belong to the same organization or to different ones. The mail client is responsible for handing mail from employees or clients to the main server, which transfers it using SMTP (Simple Mail Transfer Protocol). The server then checks whether the message is addressed to its own domain or to a different one; at the destination, recipients retrieve their mail using protocols such as POP3 or IMAP. On Windows all of this is managed by Microsoft Outlook; on Linux/UNIX operating systems the same job can be handled with a tool known as the Email Service System.
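The SMTP hand-off described above can be sketched in a few lines of Python using the standard smtplib and email modules. This is an illustrative sketch, not the project's actual code; the addresses and server name are placeholders:

```python
import smtplib
from email.message import EmailMessage

def build_message(sender, recipient, subject, body):
    """Build a simple RFC 822 message of the kind a mail client constructs."""
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = recipient
    msg["Subject"] = subject
    msg.set_content(body)
    return msg

def relay(msg, server="localhost", port=25):
    """Hand the message to an SMTP relay; the relay then decides whether
    the destination domain is local or must be forwarded onward."""
    with smtplib.SMTP(server, port) as smtp:
        smtp.send_message(msg)

# Build (but do not send) a sample message; relay() would submit it
# to the local SMTP server on port 25.
msg = build_message("employee@example.com", "client@example.org",
                    "Status report", "Delivered over SMTP.")
```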
The Email Service System is a lightweight, completely command-line-based SMTP email agent. If you need to send email from the command line, this tool is ideal. It was designed for Linux, but it is also quite useful in many other contexts. The code is written in Perl and is unique in that it requires no special modules. It has a very simple interface, making it very easy to use. It works on Linux as well as on Microsoft operating systems, although on Windows you may need to give the file a .pl extension so that Windows associates it with Perl. It can send attachments as well as plain messages, and can address multiple users through carbon copies (cc) and blind carbon copies (bcc), which makes mail easy to manage.
Like other mail systems, mail is saved in log files within the system, with sent and received mail kept in different files. When a user is created, a sent-items file is created automatically, and when mail is received an inbox file named mbox is created. We can create as many users as we want and send mail to any number of users at a time. In our project we use options such as '-t' for the recipient, '-u' for the subject, '-m' for the message body, and '-a' for an attachment; with the appropriate options, mail can be sent successfully. We can also send mail to other domains, such as Yahoo, if our system's domain is registered in DNS.
sendEmail is a Perl program and only needs to be copied to a directory in your path to make it accessible. Most likely the following steps will be sufficient:
a) Install the Linux operating system.
b) Configure the yum server.
c) Configure DNS with the help of the yum repository.
d) Copy the sendEmail script to /usr/bin
cp sendEmail /usr/bin
e) Make sure it's executable
chmod +x /usr/bin/sendEmail
f) Run the main project file, i.e., sendEmail, with the command
sendEmail or /usr/bin/sendEmail
Syntax: sendEmail -f ADDRESS [options]
-f ADDRESS from (sender) email address
* At least one recipient required via -t, -cc, or -bcc
* Message body required via -m, STDIN, or -o message-file=FILE
-t ADDRESS [ADDR ...] to email address(es)
-u SUBJECT message subject
-m MESSAGE message body
-s SERVER[:PORT] smtp mail relay, default is localhost:25
-a FILE [FILE ...] file attachment(s)
-cc ADDRESS [ADDR ...] cc email address(es)
-bcc ADDRESS [ADDR ...] bcc email address(es)
-xu USERNAME authentication user (for SMTP authentication)
-xp PASSWORD authentication password (for SMTP authentication)
-l LOGFILE log to the specified file
-v verbosity, use multiple times for greater effect
-q be quiet (no stdout output)
-o NAME=VALUE see extended help topic "misc" for details
--help TOPIC The following extended help topics are available:
addressing explain addressing and related options
message explain message body input and related options
misc explain -xu, -xp, and others
networking explain -s, etc
output explain logging and other output options
1. Running sendEmail without any arguments will produce a usage summary.
2. sendEmail is written in Perl, so no compilation is needed.
3. On a Unix/Linux OS if your perl binary is not installed at /usr/bin/perl
you may need to edit the first line of the script accordingly.
4. On a Microsoft OS you may need to put a .pl extension on sendEmail so
Windows will know to associate it with perl.
Simple Email:
sendEmail -f myaddress@isp.net \
-t myfriend@isp.net \
-s relay.isp.net \
-u "Test email" \
-m "Hi buddy, this is a test email."
Sending to multiple people:
sendEmail -f myaddress@isp.net \
-t "Scott Thomas <scott@isp.net>" jason@isp.net renee@isp.net \
-s relay.isp.net \
-u "Test email" \
-m "Hi guys, this is a test email."
Sending to multiple people using cc and bcc recipients:
(notice the different way of specifying multiple To recipients; you can do the same for cc and bcc)
sendEmail -f myaddress@isp.net \
-t "shikha@isp.net; vipin@isp.net; pardeeep@isp.net" \
-cc "monika@isp.net; kajal@isp.net; ritu@isp.net" \
-bcc "ekta@isp.net; heena@isp.net; jay@isp.net" \
-s shikha.isp.net \
-u "Test email with cc and bcc recipients" \
-m "Hi guys, this is a test email."
Sending to multiple people with multiple attachments:
sendEmail -f myaddress@isp.net \
-t vipin@isp.net \
-cc "jenn@isp.net; kitu@isp.net; deepika@isp.net" \
-s vipin.isp.net \
-u "Test email with multiple attachments" \
-m "Hi guys, this is a test email." \
-a /mnt/storage/document.sxw "/root/My Documents/Work Schedule.kwd"
Sending an email with the contents of a file as the message body:
cat /tmp/file.txt | sendEmail -f myaddress@isp.net \
-t heena@isp.net \
-s relay.isp.net \
-u "Test email with contents of file"
Sending an email with the contents of a file as the message body (method 2):
sendEmail -f myaddress@isp.net \
-t leena@isp.net \
-s relay.isp.net \
-o message-file=/tmp/file.txt \
-u "Test email with contents of file"
Sending an html email: (make sure your html file has <html> at the beginning)
cat /tmp/file.html | sendEmail -f myaddress@isp.net \
-t deepak@isp.net \
-s relay.isp.net \
-u "Test email with html content"
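For comparison, here is roughly the kind of MIME structure sendEmail assembles internally when given an HTML body and an attachment. This is an illustrative Python sketch (the addresses, filename, and file contents are made up), not sendEmail's actual Perl code:

```python
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "myaddress@isp.net"
msg["To"] = "deepak@isp.net"
msg["Subject"] = "Test email with html content"

# Plain-text fallback plus an HTML alternative, like an <html>... body
# piped into sendEmail on stdin.
msg.set_content("This is a test email.")
msg.add_alternative("<html><body><b>This is a test email.</b></body></html>",
                    subtype="html")

# Attach a file, as sendEmail does for each -a FILE argument.
msg.add_attachment(b"fake document bytes",
                   maintype="application", subtype="octet-stream",
                   filename="document.sxw")

# The result is a multipart/mixed message wrapping the text/html
# alternative pair and the attachment.
print(msg.get_content_type())  # → multipart/mixed
```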
Linux is a fast, stable, and open source operating system for PCs and workstations that features professional-level Internet services, extensive development tools, fully functional graphical user interfaces (GUIs), and a massive number of applications ranging from office suites to multimedia applications. Linux was developed in the early 1990s by Linus Torvalds, along with other programmers around the world. As an operating system, Linux performs many of the same functions as Unix, Macintosh, Windows, and Windows NT. However, Linux is distinguished by its power and flexibility, along with being freely available. Most PC operating systems, such as Windows, began their development within the confines of small, restricted personal computers, which have only recently become more versatile machines. Such operating systems are constantly being upgraded to keep up with the ever-changing capabilities of PC hardware. Linux, on the other hand, was developed in a different context. Linux is a PC version of the Unix operating system that has been used for decades on mainframes and minicomputers and is currently the system of choice for network servers and workstations. Linux brings the speed, efficiency, scalability, and flexibility of Unix to your PC, taking advantage of all the capabilities that personal computers can now provide.
Red Hat Linux is currently the most popular Linux distribution. As a company, Red Hat provides software and services to implement and support professional and commercial Linux systems. Red Hat has split its Linux development into two lines, Red Hat Enterprise Linux and the Fedora Project. Red Hat Enterprise Linux features commercial enterprise products for servers and workstations, with controlled releases issued every two years or so. The Fedora Project is an open source initiative whose Fedora release is issued every six months on average, incorporating the most recent developments in Linux operating system features as well as supported applications. Red Hat freely distributes its Fedora version of Linux under the GNU General Public License; the company generates income by providing professional-level support, consulting services, and training services. The Red Hat Certified Engineer (RHCE) training and certification program is designed to provide reliable and highly capable administrators and developers to maintain and customize professional-level Red Hat systems. Red Hat has forged software alliances with major companies like Oracle, IBM, Dell, and Sun. Currently, Red Hat provides several commercial products, known as Red Hat Enterprise Linux. These include the Red Hat Enterprise Advanced Server for intensive enterprise-level tasks; Red Hat Enterprise ES, which is a version of Linux designed for small businesses and networks; and Red Hat Enterprise Workstation. Red Hat also maintains for its customers the Red Hat Network, which provides automatic updating of the operating system and software packages on your system. Specialized products include the Stronghold secure Web server, versions of Linux tailored for IBM- and Itanium-based servers, and GNUPro development tools (redhatsoftware/gnupro).
Red Hat also maintains a strong commitment to open source Linux applications. Red Hat originated the RPM package system used on several distributions, which automatically installs and removes software packages. Red Hat is also providing much of the software development for the GNOME desktop, and it is a strong supporter of KDE. Red Hat provides an extensive set of configuration tools designed to manage tasks such as adding users, starting servers, accessing remote directories, and configuring devices such as your monitor or printer. These tools are accessible on the System Settings and Server Settings menus and windows, as well as by their names, all beginning with the term system-config.
Perl is an interpreted, high-level programming language developed by Larry Wall.
The two keywords to understand are 'interpreted' and 'high-level'. There has always been some argument over whether to use the term 'script' or 'program' for Perl source code. In general, a piece of code that is executed by hardware or a software interpreter, written in some kind of programming language, is formally called a 'program'. This is a general term that applies to programs written in machine instructions, or to any programs that are compiled or interpreted. Because Perl is open source software, which releases its source code to the public for free, you will see a source code distribution listed. Yet for usual programming purposes there is no need to download the source files unless binary distributions are not available for your system.
Perl is the most popular scripting language for writing scripts that use the Common Gateway Interface (CGI), and this is how most of us got to know the language in the first place. A cursory look at the CGI Resource Index Web site provided me with a listing of about 3000 Perl CGI scripts, compared with only 220 written in C/C++, as of this writing. There are quite a few free Web hosts that allow you to deploy custom Perl CGI scripts, while C/C++ CGI scripts are generally allowed only on paid hosting.
An exception is if you are using one of the operating systems in the Unix family (including Linux). These operating systems already include compilation tools, so you can compile Perl from source manually and install it afterwards. However, note that compilation can be a very time-consuming process, depending on the performance of your system. If you are using Linux, binary distributions in the form of RPM or DEB packages can be installed very easily. Only if you cannot find a binary distribution for your platform are you encouraged to install from the source package.

Yum is a tool for automating package maintenance for a network of workstations running any operating system that uses the Red Hat Package Manager (RPM) system for distributing packaged tools and applications. It is derived from yup, an automated package updater originally developed for Yellowdog Linux; hence its name: yum is "Yellowdog Updater, Modified".
Yup was originally written and maintained by Dan Burcaw, Bryan Stillwell, Stephen Edie, and Troy Bengegerdes of Yellowdog Linux (an RPM-based Linux distribution that runs on Apple Macintoshes of various generations). Yum itself was written and is currently maintained by Seth Vidal and Michael Stenner, both of Duke University, although as an open source GPL project many others have contributed code, ideas, and bug fixes (not to mention documentation :-). The yum link above acknowledges the (mostly) complete list of contributors, as does the AUTHORS file in the distribution tarball.
Yum is a GNU General Public License (GPL) tool; it is freely available and can be used, modified, or redistributed without any fee or royalty provided that the terms of its associated license are followed.
YUM is more scalable and tolerant than other Linux updating programs, such as Red Hat-based up2date and Debian-based APT-RPM (now managed by Conectiva), which makes it more suitable for enterprise environments.
YUM handles dependencies more gracefully than the others, supports multiple repositories, groups, and failover, and simplifies the management of multiple centralized and decentralized machines.
YUM, like up2date, is written in Python, while APT-RPM is written in C++; the difference is some 33,000 lines of code, meaning YUM and up2date are smaller and less complex. On the other hand, up2date and APT-RPM have native GUIs, while YUM is command-line only (third-party GUIs are available). Also, up2date has a rollback feature absent in YUM, which is important in case of incorrect or incomplete updates. YUM may be used in other popular distributions such as Novell's SuSE Linux or Mandrake, but there are less likely to be issues with Red Hat and Fedora.
Yum (currently) consists of two tools: yum-arch, which is used to construct an (ftp or http) repository on a suitable server, and yum, the general-purpose client. Once a yum repository is prepared (a simple process detailed below) any client permitted to access the repository can install, update, or remove one or more rpm-based packages from the repository. Yum's "intelligence" in performing updates goes far beyond that of most related tools; yum has been used successfully on numerous occasions to perform a "running upgrade" of e.g. a Red Hat 7.1 system directly to 7.3 (where the probability of success naturally depends on how "customized" the target system is and how much critical configuration file formats have "drifted" between the initial and final revisions -- YMMV).
In addition, the yum client encapsulates various informational tools, and can list rpm's both installed and available for installation, extract and publish information from the rpm headers based on keywords or globs, and find packages that provide particular files. Yum is therefore of great use to users of a workstation, either private or on a LAN; with yum they can look over the list of available packages to see if there is anything "interesting", search for packages that contain a particular tool or apply to a particular task, and more.
Yum is designed to be a client-pull tool, permitting package management to be "centralized" to the extent required to ensure security and interoperability even across a broad, decentralized administrative domain. No root privileges are required on yum clients -- yum requires at most anonymous access (restricted or unrestricted) from the clients to a repository server (often one that is maintained by a central -- and competent -- authority). This makes yum an especially attractive tool for providing "centralized" scalable administration of Linux systems in the kind of decentralized network where many independent network managers naturally occur (such as a University).
One of yum's most common uses in any LAN environment is to be run from a nightly cron script on each yum-maintained system to update every rpm package on the system safely to the latest versions available on the repository, including all security or operationally patched updates. If yum is itself installed from an rpm custom-preconfigured to perform this nightly update, an entire campus that installs its systems from a common repository base can achieve near complete consistency with respect to distribution, revision, and security. Security and other updates will typically appear on all net-connected clients no more than 24 hours after an updated rpm is placed on the repository by its (trusted) administrator, who requires no root-level privileges on any of the clients.
Consequently with yum a single trusted administrator can maintain a trusted rpm repository (set) for an entire University campus, an entire corporation, an entire government laboratory or institution. Alternatively, responsibility for different parts of a distribution can be split up safely between several trusted administrators on distinct repositories, or a local administrator can add a local trusted repository to overlay or augment the offerings of the campus level repositories. All systems at a common revision level will be consistent and interoperable to the extent that their installed packages (plus any overlays by local administrators) allow. Yum is hence an amazingly powerful tool for creating a customized repository-based package delivery and maintenance system that can scale the work of a single individual to cover thousands of machines.
To understand how yum works it helps to define a few terms:
• Server: The term server generally refers to the physical system that provides access to a repository in one or more ways. However, when yum was first developed there was generally only one server per repository, and server was often used as more or less a synonym for repository. We will use it below only in the former sense -- as a reference to a particular web, ftp, or nfs server that provides access to a repository, not as the repository itself.
• Repository: A repository is a collection of rpms under some sort of file system tree. For most purposes associated with yum, the repository will have two more important characteristics. It has had the command yum-arch run on the tree, creating a "headers" directory containing header and path information for all the rpm's under the tree, and it is accessible by URL (which means as one or more of http://my.web.server.ext/path, ftp://my.ftp.server.ext/path, or file://full/file/path to the repository tree).
• Server id: As noted above, there used to be a more or less one-to-one correspondence between servers and repositories in early versions of yum. However, this correspondence is now many-to-many. A single repository can be mirrored on many servers, and a single server can hold many repositories. When organizing "robust" access to repositories (which means providing URL's to the same repository on fallback servers in case the primary server of a repository is down) it is now necessary to label the repository with some sort of unique id, which obviously cannot be the server or repository name alone. The server id is thus the unique label used in yum.conf to indicate that all the repositories given under a single baseurl are (presumably) mirrors of one another.
• RPM: This stands for "Red Hat Package Manager", the toolset developed by Red Hat for distributing and maintaining "packages" of tools, libraries, binaries, and data for their Linux distribution. It is fully open source and is currently the basis for many Linux distributions other than Red Hat. When the documentation below speaks of "an rpm" it refers to a single package, usually named packagename-version.rpm. To understand how yum functions, it is necessary to understand a bit about the structure of rpm's. An rpm consists of basically three parts: a header, a signature, and the (generally compressed) archive itself. The header contains a complete file list, a description of the package, a list of the features and libraries it provides, a list of tools it requires (from other packages) in order to function, what (known) other packages it conflicts with, and more. The basic rpm tool needs the information in the header to permit a package to be installed (or uninstalled!) in such a way that: installing the package breaks none of the already installed packages (recursively, as they may need packages of their own to be installed); all the packages that the package requires for correct operation are also (or already) installed along with the selected package, recursively; and a later version of the package does not (accidentally) replace an earlier version.
• This process is generically known as "resolving package dependencies" and it is one of the most difficult parts of package management. It is quite possible to want to install a packaged tool that requires two or three libraries and another tool. The libraries in turn may require other libraries, the tool other tools. By the time you're done, installing the package may require that you install six or eight other packages, none of which are permitted to conflict or break any of the packages that are already there or will remain behind.
• If you have ever attempted to manage rpm's by hand, you know that tracking down all of the rpm headers and dependencies and resolving all conflicts is not easy, and that it actually becomes more difficult over time as a system manager updates this on one system, that on another, rebuilds a package here, installs something locally into /usr/local there.
• Eventually (sometimes out of sheer frustration) an rpm is --force installed, and thereafter the rpm database on the system is basically inconsistent, and any rpm install is likely to fail and require --force-ing in turn. Entropy creeps into the network, and with it security risks and dysfunction.
• Yet not updating packages is also a losing situation. If you leave a distribution-based install untouched it remains clean. However, parts of it were likely broken at the time of install -- there are always bugs, even in the most careful of major distributions. Some of those bugs are security bugs, and as crackers discover them and exploits are developed it rapidly becomes a case of "patch your system or lay out the welcome mat for vermin".
• This is a global problem with all operating systems; even Windows-based systems (notorious for their vulnerability to viruses and crackers) can be made reasonably secure if they are rigorously kept up to date. Finally, users come along and demand THIS package or THAT package which is crucial to their work -- but not in the original, clean, consistent installation.
• On balance, any professional LAN manager has little choice: they must have some sort of mechanism for updating the packages already installed on their system(s) to the latest, patched, secure, debugged versions and for adding more packages, including ones that may not have been in the distribution they relied upon for their original base install. The only questions are: what mechanism should they use, and what will it cost them (in time, hassle, learning curve, and reliability as well as in money). Let us consider the problem:
• In a typical repository there are a lot of packages with a lot of headers; about 700 packages are installed on the system I'm currently working on. However, the archive component of each package, which contains the actual binaries, libraries, and documentation installed, is much larger -- the complete rpm is thus generally two to four orders of magnitude larger than the header. For example, the headers for Open Office (a fairly large package) total about 100 kilobytes in size. The rpm itself, on the other hand, is about 30 megabytes in size. The header can be reliably delivered in a tiny fraction of a second over most networks; the rpm itself requires seconds to be delivered over 100BT, and minutes to be delivered over e.g. DSL, cable, or any relatively slow network. One occupies the physical server of a repository for a tiny interval; the other creates a meaningful, sustained load on the server. All of these are important considerations when designing or selecting an update mechanism intended to scale to perhaps thousands of clients and several distinct repositories per physical server.
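The recursive dependency resolution the points above describe can be illustrated with a deliberately simplified sketch. Real rpm/yum resolution also handles versions, obsoletes, and architectures; the package names here are invented:

```python
# Toy repository: package -> (requires, conflicts)
REPO = {
    "editor":  ({"libgui", "spell"}, set()),
    "libgui":  ({"libcore"}, set()),
    "spell":   ({"libcore"}, set()),
    "libcore": (set(), set()),
    "oldcore": (set(), {"libcore"}),   # conflicts with libcore
}

def resolve(wanted, installed=frozenset()):
    """Return the full install set for `wanted`, or raise if a
    conflict is found -- the safety property yum insists on."""
    to_install = set()

    def visit(pkg):
        if pkg in installed or pkg in to_install:
            return
        requires, conflicts = REPO[pkg]
        bad = conflicts & (installed | to_install)
        if bad:
            raise RuntimeError(f"{pkg} conflicts with {sorted(bad)}")
        to_install.add(pkg)
        for dep in requires:          # recurse: deps may have deps
            visit(dep)

    visit(wanted)
    return to_install

# Installing one package pulls in the whole closure of requirements.
print(sorted(resolve("editor")))  # → ['editor', 'libcore', 'libgui', 'spell']
```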
Early automated update tools either required a locally mounted repository directory in order to be able to access all of the headers quickly (local disk access, even from a relatively slow CD-ROM drive, being fast enough to deliver the rpm's in a timely way so that their headers could be extracted and parsed) or required that each rpm be sent in its entirety over a network to an updating client from the repository just so that it could read the header. One was locally fast but required a large commitment of local disk resources (in addition to creating a new problem: that of keeping all the local copies of a master repository synchronized). The other was very slow. Both were also network-resource intensive.
This is the fundamental problem that yum solves for you. Yum splits off the headers on the repository side (which is the job of its only repository-side tool, yum-arch). The headers themselves are thus available to be downloaded separately, and quickly, to the yum client, where they are typically cached semi-permanently in a small footprint in /var/cache/yum/serverid (recall that serverid is a label for a single repository that might be mirrored on several servers and available on a fallback basis from several URL's). Yum clients also cache (space permitting or according to the requirements and invocation schema selected by the system's administrator) rpm's when they are downloaded for an actual install or update, giving a yum client the best of both the options above -- a local disk image of (just the relevant part of) the repository that is automatically and transparently managed and rapid access to just the headers.
An actual download of all the headers associated with packages found on your system occurs the first time a yum client is invoked; thereafter it adds to or updates the cached headers (and downloads and caches the required rpm's) only if the repository has more recent versions or if the user has deliberately invoked yum's "clean" command to empty all its caches. All of yum's dependency resolution then proceeds from these cached header files, and if for any reason the install or update requires an rpm already in the cache to be reinstalled, it is immediately available.
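The caching behavior just described can be sketched as a simple cache-on-miss policy. This is an in-memory illustration of the idea (the package name, version strings, and `fetch_header` stand-in are invented), not yum's actual code, which stores headers on disk under /var/cache/yum/serverid:

```python
# In-memory sketch of a per-repository header cache.
cache = {}        # (name, version) -> header bytes
downloads = []    # record of simulated network fetches

def fetch_header(name, version):
    """Stand-in for a network fetch; real yum issues an HTTP/FTP request."""
    downloads.append((name, version))
    return f"header for {name}-{version}".encode()

def get_header(name, version):
    """Serve the header from cache, downloading only on a miss --
    i.e. the first run, or when the repository carries a newer version."""
    key = (name, version)
    if key not in cache:
        cache[key] = fetch_header(name, version)
    return cache[key]

get_header("openoffice", "1.1")
get_header("openoffice", "1.1")   # second call is served from the cache
get_header("openoffice", "1.2")   # a newer version triggers one more fetch
```

Only two fetches occur for the three calls, which is the whole point: dependency resolution can then run repeatedly against local headers at local-disk speed.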
As a parenthetical note, the author has used yum's caches in a trick to create a "virtual" update repository on his homogeneous, DSL-connected home LAN. By NFS exporting and mounting (rw,no_root_squash) /var/cache/yum to all the LAN clients, once normal updates have caused a header or rpm to be retrieved for any local host, they are available to all the local hosts over a (much faster than DSL) 100BT NFS mount. This saves tremendously on bandwidth and (campus) server load, using instead the undersubscribed server capacity of a tiny but powerful LAN. Best of all, there "is no setup"; what I just described is all there is to it. A single export and a mount on all the clients, and yum itself transparently does all of the work.
However, it is probably better in many cases to use rsync or other tools to provide a faithful mirror of the repository in question and use yum's fallback capability to accomplish the same thing (single use of a limited DSL channel) by design. This gives one a much better capability of standing alone should update access go away on the "server" of the yum cache NFS exported across a LAN.
With the header information (only) handy on high-speed local media, the standard tools used to maintain rpm's are invoked by yum and can quickly proceed to resolve all dependencies, determine if it is safe to proceed, what additional packages need to be installed, and so forth. Note well that yum is designed (by highly experienced systems administrator, Seth Vidal, with the help of all the other highly experienced systems administrators on the yum list) to be safe. It will generally not proceed if it encounters a dependency loop, a package conflict, or a revision number conflict.
If yum finds that everything is good and the package can be safely installed, removed, or updated, it can either be invoked in such a way that it does so automatically with no further prompts so it can run automatically from cron, or (the general default when invoked from a command line) it can issue a user a single prompt indicating what it is about to do and requesting permission to proceed. If it finds that the requested action is in fact not safe, it will exit with as informative an error message as it can generate, permitting the system's administrator to attempt to resolve the situation by hand before proceeding (which may, for example, involve removing certain conflicting packages from the client system or fixing the repository itself).
From the overview given above, it should be apparent that yum is potentially a powerful tool indeed, using a single clever idea (the splitting off of the rpm headers) to achieve a singular degree of efficiency. One can immediately imagine all sorts of ways to exploit the information now so readily available to a client and wrap them all up in a single interface to eliminate the incredibly arcane and complex commands otherwise required to learn anything about the installed package base on a system and what is still available. The yum developers have been doing just that on the yum list - dreaming up features and literally overnight implementing the most attractive ones in new code. At this point yum is very nearly the last thing you'll ever need to manage packages on any rpm based system once it has gotten past its original, distribution vendor based, install. Indeed, it is now so powerful that it risks losing some of its appealing simplicity. This description is intended to document yum's capabilities so even a novice can learn to use it client-side effectively in a very short time, and so that LAN administrators can have guidance in the necessarily more complex tasks associated with building and maintaining the repositories from which the yum clients retrieve headers and rpm's.
Yum's development is far from over. Volunteers are working on a GUI (to encapsulate many of yum's features for tty-averse users). Some of yum's functionality may be split off so that instead of a single client command there are two, or perhaps three (each with a simpler set of subcommand options and a clear differentiation of functionality). The idea of making yum's configuration file XML (to facilitate GUI maintenance and extensibility) is being kicked around. And of course, new features are constantly being requested and discussed and implemented or rejected. Individuals with dreams of their own (and some mad python or other programming skills:-) are invited to join the yum list and participate in the grand process of open source development.
Because yum invokes the same tools and python bindings used by e.g. Red Hat to actually resolve dependencies and perform installations (functioning as basically a super smart shell for rpm and anaconda that can run directly from the local header cache) it has proven remarkably robust over several changes to the rpm toolset that have occurred since its inception, some of them fairly major. It is at least difficult for yum to "break" without Red Hat's own rpm installation toolset breaking as well, and after each recent major change yum has functioned again after a very brief period of tune-up.
It is important to emphasize, however, that yum is not a tool for administering Red Hat (only) repositories. Red Hat will be prominently mentioned in this HOWTO largely because we (Duke) currently use a Red Hat base for our campus wide Linux distribution, maintain a primary (yum-enabled) Red Hat mirror, and are literally down the road a few miles from Red Hat itself. Still, if anything, yum is in (a friendly, open source) competition with Red Hat's own up2date mechanism and related mechanisms utilized by other distribution vendors.
So Note Well: Yum itself is designed for, and has been successfully used to support, rpm repositories of any operating system or distribution that relies on rpm's for package management and contains or can be augmented with the requisite rpm-python tools. Yum has been tested on or is in production on just about all the major rpm-based linuxes, as well as at least one Solaris repository. Its direct conceptual predecessor (with which it shares many design features and ideas, although very little remaining actual code) is Yellowdog Linux's updater tool yup, which had nothing whatsoever to do with Red Hat per se. Yum truly is free like the air, and distribution-agnostic by deliberate design.
A moment or two of meditation upon dependency resolution should suffice to convince one that Great Evil is possible in a large rpm repository. You have hundreds, perhaps thousands of rpm packages. Some are commercial, some are from some major distribution(s), others are local homebrew. What if, among all of these packages built at different times and by different people, there exist rpm's such that (e.g.) rpm A requires rpm B, which conflicts with rpm C (already installed)? What if rpm A requires rpm B (revision 1.1.1) but rpm B (revision 1.2.1) is already installed and is required in that revision by rpm C (also already installed)? It is entirely possible to assemble an "rpm repository from hell" such that nearly any attempt to install a package will break something or require something that breaks something. (As yet another parenthetical note, this was the thing that made many rpm-based distribution users look at Debian with a certain degree of longing. Apt untangles all of this for you and works entirely transparently from a single distribution "guaranteed to be consistent", and provides some lovely tools (some of which are functionally cloned in yum) for package management and dependency resolution. However, as is made clear on the yum site, yum is a better solution in many ways than apt or, for that matter, Current or up2date. I believe that the designers are working fairly aggressively to make sure it stays that way.)
A cynical (but correct) person would note that this is why rpmfind and other rpm "supertools" ultimately failed. Yes, rpmfind could locate any rpm on the planet in its super repository in a matter of a few seconds, BUT (big but) resolving dependencies was just about impossible. If one was lucky, installing an e.g. Mandrake rpm on a Red Hat system that used SuSE library rpm's would work. Sometimes one needed luck even to install the Red Hat rpm's it would find on a Red Hat system, as they were old or built with non-updated libraries. Sometimes things would "kind of work". Other times installing an rpm would break things like all hell, more or less irreversibly. Untangling and avoiding this mess is what earns the major (rpm-based or not) Linux distribution providers their money. They provide an entire set of rpm's (or other packages) "all at once" that is guaranteed to be consistent in the distribution snapshot on the CD's or ISO images or primary website. All rpm's required by any rpm in the set are in the set. No rpm's in the provided set conflict with other rpm's in the set. Consequently any rpm in the set can be selected to be installed on any system built from the distribution with the confidence that, once all the rpm dependencies are resolved, the rpm (along with its missing dependencies) can be successfully installed. The set provided is at least approximately complete, so that one supposedly has little incentive or need to install packages not already in the distribution (except where so doing requires the customer to "buy" a more expensive distribution from the vendor).
In the real world this ideal of consistency and completeness is basically never achieved. All the distributions I've ever tried or know about have bugs, often aren't totally consistent, and certainly are not complete. A "good" distribution can serve as a base for a repository and support e.g. network installs as well as disk or CD local installs, but one must be able to add, delete, update packages new and old to the repository and distribute them to all the systems that rely on the repository for update management both automatically and on demand.
Alas, rpm itself is a terrible tool to use for this purpose, a fact that has driven managers of rpm-based systems to regularly tear their hair for years now. Using rpm directly to manage rpm installs, the most one can do is look one step ahead to try to resolve dependencies. Since dependency loops are not at all uncommon in real-world repositories where things are added and taken away (and far from unknown even in boxed-set Linux distributions that are supposed to be dependency-loop free) one can literally chase rpm's around in loops or up a tree trying to figure out what has to be installed before finally succeeding in installing the one lonely application you selected originally. rpm doesn't permit one to tell it to "install package X and anything else that it needs, after you figure out what that might be". Yum, of course, does.
Even yum, though, can't "fix" a dependency loop, or cope with all the arcane revision numbering schemes or dependency specifications that appear in all the rpm's one might find and rebuild or develop locally for inclusion in a central repository. When one is encountered, a Real Human has to apply a considerable amount of systems expertise to resolve the problem. This suggests that building rpm's from sources in such a way that they "play nice" in a distribution repository, while a critical component of said repository, is not a trivial process. So much so that many rpm developers simply do not succeed.
Also, yum achieves its greatest degree of scalability and efficiency if only rpm-based installation is permitted on all the systems using yum to keep up to date. Installing locally built software into /usr/local becomes Evil and must be prohibited as impossible to keep up to date and maintained. Commercial packages have to have their cute (but often dumb) installation mechanisms circumvented and be repackaged into some sort of rpm for controlled distribution.
Consequently, repository maintainers must willy-nilly become rpm builders to at least some extent. If SuSE releases a lovely new tool in source rpm form that isn't in your current Red Hat based repository, of course you would like to rebuild it and add it. If your University has a site license for e.g. Mathematica and you would like to install it via the (properly secured and license controlling) repository, you will need to turn it into an rpm. If nothing else, you'll need to repackage yum itself for client installations so that its configuration files point to your repositories and not the default repositories provided in the installation rpm's /etc/yum.conf.
For all of these reasons an entire section of this HOWTO is devoted to a guide for repository maintainers and rpm builders, including some practices which (if followed) would make dependency and revision numbering problems far less common and life consequently good. In the next few sections we will see where to get yum, how to install it on the server side, and then how to set up and test a yum client. Following that there will be a few sections on advanced topics and design issues; how to set up a repository in a complex environment, how to build rpm's that are relatively unlikely to create dependency and revision problems in a joint repository, how to package third party (e.g. site licensed) software so it can be distributed, updated, and maintained via yum (Linux software distributors take note!) and more.
Before proceeding further, we need to have yum itself handy, specifically the yum-arch command and its current documentation. If you are working from the rpm's, you've probably already installed them on your repository (I mean actually installed the program, not necessarily inserted the rpm's into a server on the repository) and one or two test clients. If not, please do, and skip ahead to the sections on installing yum or setting up a server with yum-arch and creating a suitable /etc/yum.conf.
However, if you get the sources via tarball or from the CVS repository, you will have to build yum locally. If you plan to repackage it (basically required if you are setting up a repository) so that yum clients automatically use the yum-based repositories you set up in their /etc/yum.conf, you will need the tarball (yum-*.tgz) anyway. The steps required to transform the provided tarball into an rpm are given below.
Note that many of these steps are not yet fully documented in the source README or INSTALL files, which is a major reason a HOWTO is sorely needed by the project. Most of yum's current systems-manager users are already sufficiently expert to build rpm's without additional instructions, but of course many who would like to use yum are not, and in any event it never hurts to document a moderately complicated process even for the experts.
Experts can also disagree. The steps below are ONE way of proceeding, but there are many others. Some managers will be working in monolithic (top down) management models where they have root control of all clients and will prefer to push /etc/yum.conf out to the clients directly, not de facto pull it onto clients during an install from a repository where it is available, preconfigured for the site in rpm form. Tools exist to make this simple enough (cfengine, rsync, more). Different people also have different ways of building rpms. Some always proceed as root, for example, using /usr/src/redhat (which exist for that purpose, after all).
However, in my own mind working as root is something to be avoided as much as possible because of the risk of unintended consequences when one makes a mistake. Some of you reading this article may be very uncomfortable working as root for this very reason.
In this article we are going to discuss how we can configure Yum for DVD sources in RHEL5.
Yum is the package management tool used these days. It has replaced the old "up2date" command which used to ship with RHEL4. That command fetched updates from the RHN (Red Hat Network) for the installed operating system, provided the user had bought a support/update entitlement from Red Hat. With the new version of Red Hat, and its free clone CentOS 5, "up2date" has been dropped and "yum" has been included instead. "yum" had been in Fedora Core for a long time, where it was used to update packages via third-party repositories. It matured with Fedora, and now that Red Hat considers it mature enough, it has made its way into RHEL.
The major problem one faces with yum is configuring it for DVD/CD sources. Yum does not come enabled for these sources by default, and we need to enable it explicitly. I do not know the reason behind this default but, whatever it is, we can still adapt "yum" on our own and configure it to use DVD/CD install sources.
Before starting I would like to mention that I am using a DVD source in this article, represented by "/dev/dvd" and mounted on "/media/cdrom". The steps described here can easily be extended to CD sources as well. Later in this article I will show how to configure a local yum repository and use it for package management on LAN clients.
First of all, put the media CD/DVD into your CD/DVD drive. If it is not automounted (for example, when you are logged in as root on a console rather than in a GUI), mount it manually:
mount /dev/dvd /media/cdrom
After mounting the DVD we need to copy its contents into a directory. For example, I have a directory /dvd/rhel5/ (create it first with mkdir -p /dvd/rhel5). I will copy the whole contents of /media/cdrom into /dvd/rhel5/:
cp -r /media/cdrom/* /dvd/rhel5/
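The mount-and-copy step can be sketched as a short script. Since a real DVD drive is not always at hand, the sketch below simulates /media/cdrom and /dvd/rhel5 with temporary directories; the real commands appear in the comments.

```shell
# Stand-ins for the article's paths: SRC plays /media/cdrom (the
# mounted DVD), DEST plays /dvd/rhel5 (the local copy).
SRC=$(mktemp -d)
DEST=$(mktemp -d)/rhel5

# Pretend the mounted DVD carries a Server/ tree with one rpm in it.
mkdir -p "$SRC/Server/repodata"
touch "$SRC/Server/dummy-1.0-1.noarch.rpm"

# On the real system this would be:
#   mount /dev/dvd /media/cdrom
#   mkdir -p /dvd/rhel5
#   cp -r /media/cdrom/* /dvd/rhel5/
mkdir -p "$DEST"
cp -r "$SRC"/* "$DEST"/

ls "$DEST/Server"
```

The same pattern applies unchanged to a CD set; you would simply repeat the copy for each disc.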
After copying the contents it is time to make some modifications. First we need to bring the xml files that define the package groups up one directory level:
mv /dvd/rhel5/Server/repodata/comps-rhel5-server-core.xml /dvd/rhel5/
mv /dvd/rhel5/VT/repodata/comps-rhel5-vt.xml /dvd/rhel5/
mv /dvd/rhel5/Cluster/repodata/comps-rhel5-cluster.xml /dvd/rhel5/
mv /dvd/rhel5/ClusterStorage/repodata/comps-rhel5-cluster-st.xml /dvd/rhel5/
Now we need to delete the repodata/ directories which come with the default install tree. The reason is that their xml files contain a string of the form
<location xml:base="media://1170972069.396645#1" ... />
This string is present in repomd.xml as well as primary.xml.gz, and it breaks DVD/CD sources under yum. So we need to do the following:
rm -rf /dvd/rhel5/Server/repodata
rm -rf /dvd/rhel5/VT/repodata
rm -rf /dvd/rhel5/Cluster/repodata
rm -rf /dvd/rhel5/ClusterStorage/repodata
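The move-and-delete steps can be collapsed into one loop over the four repositories. This is a sketch using a temporary directory in place of /dvd/rhel5, with empty files standing in for the real comps xml files, so it can be tried safely.

```shell
# Temporary stand-in for /dvd/rhel5 with the four repo directories.
BASE=$(mktemp -d)
for repo in Server VT Cluster ClusterStorage; do
    mkdir -p "$BASE/$repo/repodata"
done
# Fake group files as shipped on the DVD.
touch "$BASE/Server/repodata/comps-rhel5-server-core.xml" \
      "$BASE/VT/repodata/comps-rhel5-vt.xml" \
      "$BASE/Cluster/repodata/comps-rhel5-cluster.xml" \
      "$BASE/ClusterStorage/repodata/comps-rhel5-cluster-st.xml"

# Pull each group file up one level, then drop the stale repodata/
# directory that carries the troublesome media:// locations.
for repo in Server VT Cluster ClusterStorage; do
    mv "$BASE/$repo"/repodata/comps-*.xml "$BASE/"
    rm -rf "$BASE/$repo/repodata"
done

ls "$BASE"
```

On the real tree, set BASE=/dvd/rhel5 and drop the simulation block at the top.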
After deleting the default repodata/ directories it is time to re-create them using the "createrepo" command. This command does not come installed by default, so we need to install its rpm first:
rpm -ivh /dvd/rhel5/Server/createrepo-0.4.4-2.fc6.noarch.rpm
The next step is to run this command for each repository. Before running it, switch to the /dvd/rhel5 directory (where we moved the comps files), then run:
createrepo -g comps-rhel5-server-core.xml Server/
createrepo -g comps-rhel5-vt.xml VT/
createrepo -g comps-rhel5-cluster.xml Cluster/
createrepo -g comps-rhel5-cluster-st.xml ClusterStorage/
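The four invocations follow one pattern, so they can be generated with a loop. The sketch below only echoes the commands rather than running them, since createrepo needs the real rpm tree; run the printed lines from /dvd/rhel5.

```shell
# Pair each repo directory with its group file and print the
# createrepo command for it (echoed, not executed).
OUT=$(
  for pair in Server:comps-rhel5-server-core.xml \
              VT:comps-rhel5-vt.xml \
              Cluster:comps-rhel5-cluster.xml \
              ClusterStorage:comps-rhel5-cluster-st.xml
  do
      repo=${pair%%:*}     # part before the colon: repo directory
      comps=${pair#*:}     # part after the colon: group file
      echo "createrepo -g $comps $repo/"
  done
)
echo "$OUT"
```

Replacing `echo` with `eval` (or simply removing the `echo "createrepo ..."` quoting and calling createrepo directly) would execute the commands in place.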
The above commands do most of the job. Now it is time to configure /etc/yum.conf for our local repository. Note that we could also create separate repo files in the /etc/yum.repos.d/ directory, but I tried that without any luck. So do the following:
vi /etc/yum.conf
In this file type in the following (one stanza per repository):
# PUT YOUR REPOS HERE OR IN separate files named file.repo
# in /etc/yum.repos.d
[Server]
name=Server
baseurl=file:///dvd/rhel5/Server/
enabled=1

[VT]
name=Virtualization
baseurl=file:///dvd/rhel5/VT/
enabled=1

[Cluster]
name=Cluster
baseurl=file:///dvd/rhel5/Cluster/
enabled=1

[ClusterStorage]
name=Cluster Storage
baseurl=file:///dvd/rhel5/ClusterStorage/
enabled=1
We can also use GPG key signing. For that, add gpgcheck=1 and the following gpgkey line to each repo section, pointing at the key files shipped on the DVD:
gpgkey=file:///dvd/rhel5/RPM-GPG-KEY-fedora file:///dvd/rhel5/RPM-GPG-KEY-fedora-test file:///dvd/rhel5/RPM-GPG-KEY-redhat-auxiliary file:///dvd/rhel5/RPM-GPG-KEY-redhat-beta file:///dvd/rhel5/RPM-GPG-KEY-redhat-former
This will be sufficient for now. Let's create the yum cache now.
yum clean all
yum update
It's all done now. We can use the "yum" command to install/remove/query packages, and yum will use the local repository. Below are some basic "yum" commands that will do the job for you; for more options see the man page of "yum".
yum install package_name
Description: Installs the given package
yum list
Description: Lists all available packages in the yum database
yum search package_name
Description: Searches for a particular package in the database and, if found, prints brief info about it.
yum remove package_name
Description: Removes a package.
Now we will mention the steps to extend this local repository into a local http-based repository, so that LAN clients can use it for package management. I will be using Apache to serve the repository, as it is well suited for this job.
To configure the repository for http access by LAN clients we need to make it available to them. For that I am declaring a virtual host entry in Apache's configuration file. This is how it looks for us:
<VirtualHost *:80>
    ServerAdmin webmaster@server.example.com
    ServerName server.example.com
    DocumentRoot "/dvd/rhel5/"
    ErrorLog logs/server.example.com-error_log
    CustomLog logs/server.example.com-access_log common
</VirtualHost>
After this, start Apache and enable it at boot:
service httpd start
chkconfig httpd on
Now it is time to make the yum.conf file that we will use at the client end. Here is my client yum.conf; you can use it and modify it according to your setup:
# PUT YOUR REPOS HERE OR IN separate files named file.repo
# in /etc/yum.repos.d
[Server]
name=Server
baseurl=http://server.example.com/Server/
enabled=1

[VT]
name=Virtualization
baseurl=http://server.example.com/VT/
enabled=1

[Cluster]
name=Cluster
baseurl=http://server.example.com/Cluster/
enabled=1

[ClusterStorage]
name=Cluster Storage
baseurl=http://server.example.com/ClusterStorage/
enabled=1
Copy this file to the /etc/ directory at the client end, replacing the original file there. After copying is done, run:
yum clean all
yum update
rpm --import /etc/pki/rpm-gpg/*
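The client-side repo configuration can also be generated with a small script instead of being edited by hand. This is a minimal sketch: CONF is a temp file standing in for the client's /etc/yum.conf, server.example.com is the example repository host used in this article, and the stanza fields mirror the server-side config.

```shell
# Write the client-side repo configuration. CONF stands in for
# /etc/yum.conf on the client; REPOHOST is the LAN repo server.
CONF=$(mktemp)
REPOHOST=server.example.com
{
  for repo in Server VT Cluster ClusterStorage; do
      echo "[$repo]"
      echo "name=$repo"
      echo "baseurl=http://$REPOHOST/$repo/"
      echo "enabled=1"
      echo
  done
} > "$CONF"

cat "$CONF"
```

On a real client you would copy the generated file over /etc/yum.conf (or into /etc/yum.repos.d/) and then run yum clean all followed by yum update, as above.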
Now you can use yum on the client end to install any package, and it will communicate with the local repo server to get the package for you. You can also use pirut in the same way to get things done.
So this is how we can configure yum for RHEL5 Server and also use it to create our own local repo server for the LAN.
3.1.1 REQUIREMENTS (RPMs Required):
bind, caching-nameserver
• # rpm -qa | grep bind
bind, bind-utils, bind-chroot, bind-libs, ypbind, kdebindings
• # rpm -qa | grep caching
Note: If caching-nameserver or bind is not installed, install it first. Configure YUM and install it:
• # yum install caching*
1. # cd /var/named/chroot/etc
2. # cp named.caching<press tab> named.conf
3. # cat named.rf<press tab> >>named.conf
4. #vi named.conf
a. At around line 15 (listen-on port 53), change the default IP (127.0.0.1) to the IP of your system, i.e. the DNS server (e.g. 192.168.0.1); leave the port number (53) as it is.
b. At around line 23, change allow-query, e.g. allow-query { 192.168.0.0/24; }; (for allowing the entire network, so all systems of this network can query this DNS server).
5. Comment out lines 31 to 36, that is, the view localhost_resolver block; by default it is for the local host only.
6. Copy the zone "localhost" IN block (usually lines 57 to 61) and paste it at the end of the file, then modify the pasted block.
a. Modify :
i. zone "example.com" -> this can be whatever domain name you want.
ii. type master; --> specifies the master DNS server.
iii. file "f.zone"; --> the database file containing the name-to-IP mapping (the forward zone; the name can be anything, but the extension should be .zone).
iv. Keep only these two lines in the block (type master; and file "f.zone";); the database file itself is created in later steps.
v. Remove the other lines from this block.
7. Copy lines 63 to 67, that is, the zone "0.0.127.in-addr.arpa" IN block, and paste it at the end of the file. Now modify the pasted block:
a. zone "0.168.192.in-addr.arpa" IN { type master; file "r.zone"; };
b. Keep these two lines; remove the other lines.
8. save & quit
9. Check the configuration settings.
a. #named-checkconf named.conf
b. If no message comes, it means the settings are ok.
10. You can create a master or a slave DNS server. A slave doesn't have its own database; it updates its database from the master after some time period.
a. f.zone -> forward zone
b. r.zone -> reverse zone
11. # cd /var/named/chroot/var/named
12. # cp localhost.zone f.zone
13. #vi f.zone
14. At line number 1: $TTL 86400 specifies a time in seconds; after this many seconds a slave will update its database from the master server.
15. Line number 2 represents the domain name, so instead of @ you can give a domain name such as example.com (as mentioned in step 6).
16. Modify this line :
a. IN SOA server1.example.com. (SOA: Start of Authority. Note: the trailing dot is mandatory.)
17. IN NS server1.example.com. (Note: the dot is compulsory.)
Note: The last dot marks the name as fully qualified. If you do not add it, example.com will automatically be appended again and the name will not resolve.
18. Delete the IN AAAA line; it is used for the IPv6 addressing scheme.
19. Give entries for the hosts of your network, in lines 10, 11, 12 respectively (the 192.168.0.x addresses below are examples matching the 192.168.0.0/24 network used here):
a. server1.example.com. IN A 192.168.0.1
b. server2.example.com. IN A 192.168.0.2
c. server3.example.com. IN A 192.168.0.3
d. Save this file and quit.
20. # cp f.zone r.zone
21. Delete last three lines & give the following entry.
a. 1 IN PTR server1.example.com.
b. 2 IN PTR server2.example.com.
c. 3 IN PTR server3.example.com.
d. Note: 1, 2, 3 are nothing but the last octets of the IP addresses (you can also give the full reversed name, e.g. 1.0.168.192.in-addr.arpa., instead of 1).
e. Save this file and quit.
22. #chgrp named f.zone
23. #chgrp named r.zone
24. #chgrp named /var/named/chroot/etc/named.conf
25. # service named restart
26. Now the server-side setup is done.
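The two zone files built in steps 11 to 21 can also be generated in one go. The sketch below writes them into a temporary directory standing in for /var/named/chroot/var/named; the 192.168.0.x addresses and the SOA timer values are examples, not values mandated by the steps above.

```shell
# ZDIR stands in for /var/named/chroot/var/named.
ZDIR=$(mktemp -d)

# Forward zone: name -> IP (example hosts on 192.168.0.0/24).
cat > "$ZDIR/f.zone" <<'EOF'
$TTL 86400
@       IN SOA  server1.example.com. root.example.com. (
                42      ; serial
                28800   ; refresh
                7200    ; retry
                604800  ; expire
                86400 ) ; minimum
        IN NS   server1.example.com.
server1.example.com.    IN A    192.168.0.1
server2.example.com.    IN A    192.168.0.2
server3.example.com.    IN A    192.168.0.3
EOF

# Reverse zone: last octet -> name.
cat > "$ZDIR/r.zone" <<'EOF'
$TTL 86400
@       IN SOA  server1.example.com. root.example.com. (
                42      ; serial
                28800   ; refresh
                7200    ; retry
                604800  ; expire
                86400 ) ; minimum
        IN NS   server1.example.com.
1       IN PTR  server1.example.com.
2       IN PTR  server2.example.com.
3       IN PTR  server3.example.com.
EOF

ls "$ZDIR"
```

On the real server, give the files to the named group (chgrp named f.zone r.zone) and restart named, as in the steps above.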
1. #vi /etc/resolv.conf
a. Delete everything in it and give this entry:
b. nameserver 192.168.0.1 (this is the address of the DNS server which the clients will query, using our example server's IP; give the same entry on every client machine).
2. # service named restart (not strictly required at the client side, but give it anyway)
3. Now check from client machine whether DNS Server is resolving the name or not.
a. # host server1.example.com (it should give the IP address).
b. # host 192.168.0.1 (it should give the name).
Figure 4.1 Yum Configured
Figure 4.2 DNS Configured
Figure 4.3 Prompt Occurred
E-mail has been successfully sent to a different user within the same DNS, but a prompt occurred because the -m option was not given for typing the message.
Figure 4.4 E-mail sent
E-mail was sent successfully without any error
Figure 4.5 GUI Mode
E-mail Service System running in GUI (graphical user interface) mode of the operating system.
Figure 4.6 Client side
E-Mail received at client side
In the end we can conclude that with our project, the Email Service System, the task of sending mail or transferring data through mail becomes quite easy. A process that is hard in Linux becomes simple: normally, to send mail through commands you first have to learn all the commands and then use them, but when our project runs, all the options are shown up front and you simply pick an option and send the mail. If you give a wrong option it prints a proper warning, so you can correct yourself. So we can say that it is quite feasible for a company that uses Linux as its operating system and wants a proper mail system, much as Microsoft Outlook is used on Windows.
It is reliable, as a copy of every mail, whether sent or received, is saved. It is also secure, as only existing users can receive mails and reply. In addition, user passwords are required for SMTP authentication, so it is safe enough.
On the basis of its feasibility study we are able to conclude the following sub-points:
ECONOMIC FEASIBILITY: It is economically feasible.
TECHNICAL FEASIBILITY: It is also technically feasible, because it requires little storage space, little hardware, and little processing power.
SOCIAL FEASIBILITY: It is a simple tool and easily executable so everyone can use it.
In future we will try to convert it to a completely graphical mode. All the options like send, reply, forward, attachment etc. will be graphical, with buttons for users to press, so the work will be easy even for a layman who is not a professional user. There will be no need to remember commands like '-t' for the 'to' option or '-u' for the 'subject'. You will be able to use it like your general mail systems such as Yahoo or Gmail, and it will make your work completely effortless.