Transaction Processing System
From Wikipedia, the free encyclopedia
A Transaction Processing System (TPS) is a type of information system. TPSs collect, store, modify, and retrieve the transactions of an organisation. A transaction is an event that generates or modifies data that is eventually stored in an information system. To be considered a transaction processing system, the system must pass the ACID test.
Types of Transaction Processing Systems
Batch Processing
Batch processing collects transactions over a period of time and processes them together as a group; a single bill covering a month's worth of purchases is a typical example.
Features of Transaction Processing Systems
Rapid Response
Fast performance with a rapid response time is critical. Businesses cannot afford to have customers waiting for a TPS to respond; the turnaround time from the input of the transaction to the production of the output must be a few seconds or less.
Reliability
Many organisations rely heavily on their TPS; a breakdown will disrupt operations or even stop the business. For a TPS to be effective, its failure rate must be very low. If a TPS does fail, then quick and accurate recovery must be possible. This makes well-designed backup and recovery procedures essential.
Inflexibility
A TPS must process every transaction in the same way regardless of the user, the customer or the time of day. If a TPS were flexible, there would be too many opportunities for non-standard operations. For example, a commercial airline needs to consistently accept airline reservations from a range of travel agents; accepting different transaction data from different travel agents would be a problem.
Controlled processing
The processing in a TPS must support an organisation's operations. For example, if an organisation allocates roles and responsibilities to particular employees, then the TPS should enforce and maintain this requirement.
ACID Test Properties: First Definition
Atomicity
A transaction’s changes to the state are atomic: either all happen or none happen. These changes include database changes, messages, and actions on transducers.
Consistency
A transaction is a correct transformation of the state. The actions taken as a group do not violate any of the integrity constraints associated with the state. This requires that the transaction be a correct program.[2]
Isolation
Even though transactions execute concurrently, it appears to each transaction T, that others executed either before T or after T, but not both.[2]
Durability
Once a transaction completes successfully (commits), its changes to the state survive failures.[2]
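The atomicity and durability properties can be sketched with Python's built-in sqlite3 module, whose connections behave transactionally. The table, names and amounts below are hypothetical; the CHECK constraint stands in for any integrity rule that a transaction might violate:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, "
             "balance INTEGER CHECK (balance >= 0))")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [("alice", 100), ("bob", 100)])
conn.commit()

try:
    with conn:  # begins a transaction; commits on success, rolls back on error
        conn.execute("UPDATE accounts SET balance = balance + 150 "
                     "WHERE name = 'bob'")
        # This debit would leave alice at -50 and violates the CHECK
        # constraint, so the WHOLE transaction (including the credit
        # above) is rolled back: all or nothing.
        conn.execute("UPDATE accounts SET balance = balance - 150 "
                     "WHERE name = 'alice'")
except sqlite3.IntegrityError:
    pass

print(dict(conn.execute("SELECT name, balance FROM accounts")))
# {'alice': 100, 'bob': 100}
```

Neither account changed: the failed transfer left the database exactly as it was before the transaction began.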
Storing and Retrieving
Storing and retrieving information from a TPS must be efficient and effective. The data is stored in warehouses or other databases, and the system must be well designed for its backup and recovery procedures.
Databases and files
The storage and retrieval of data must be accurate, as it is used many times throughout the day. A database is a neatly organised collection of data that stores an organisation's accounting and operational records. Because the data in a database is sensitive, access is usually restricted so that users see only the data they need. Databases are designed using hierarchical, network or relational structures; each structure is effective in its own sense.
- Hierarchical structure: organises data in a series of levels, hence the name hierarchical. Its top-to-bottom, tree-like structure consists of nodes and branches; each child node has branches and is linked to only one higher-level parent node.
- Network structure: similar to the hierarchical structure, a network structure also organises data using nodes and branches. But, unlike the hierarchical structure, each child node can be linked to multiple higher-level parent nodes.
- Relational structure: unlike the network and hierarchical structures, a relational database organises its data in a series of related tables. This gives flexibility, as relationships between the tables are built through shared keys.
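A toy illustration of the relational structure, using Python's sqlite3 module: two tables linked by a shared key, with the relationship expressed in a join query. The table and customer names are hypothetical:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE orders (id INTEGER PRIMARY KEY,
                     customer_id INTEGER REFERENCES customers(id),
                     total REAL);
INSERT INTO customers VALUES (1, 'Acme Ltd'), (2, 'Globex');
INSERT INTO orders VALUES (10, 1, 250.0), (11, 1, 99.5), (12, 2, 40.0);
""")

# The relationship between the tables lives in the query, not in the
# storage layout -- that is the flexibility relational structures offer.
rows = db.execute("""
    SELECT c.name, SUM(o.total)
    FROM customers c JOIN orders o ON o.customer_id = c.id
    GROUP BY c.name ORDER BY c.name
""").fetchall()
print(rows)  # [('Acme Ltd', 349.5), ('Globex', 40.0)]
```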
The following features are included in real time transaction processing systems:
- Good data placement: The database should be designed around the access patterns of many simultaneous users.
- Short transactions: Short transactions enable quick processing and reduce contention between concurrent users.
- Real-time backup: Backups should be scheduled during periods of low activity to prevent the server from lagging.
- High normalisation: This lowers redundant information, which increases speed, improves concurrency and also improves backups.
- Archiving of historical data: Uncommonly used data is moved into other databases or backup tables. This keeps tables small and also improves backup times.
- Good hardware configuration: Hardware must be able to handle many users and provide quick response times.
In a TPS there are five different types of files; the TPS uses these files to store and organise its transaction data:
- Master file: Contains information about an organisation's business situation. Most transactions and databases are stored in the master file.
- Transaction file: The collection of transaction records. It helps to update the master file and also serves as an audit trail and transaction history.
- Report file: Contains data that has been formatted for presentation to a user.
- Work file: Temporary files used by the system during processing.
- Program file: Contains the instructions for the processing of data.
Data Warehousing
A data warehouse is a database that collects information from different sources. Data gathered by real-time transaction processing can be analysed efficiently if it is stored in a data warehouse. A data warehouse provides data that is consolidated, subject-orientated, historical and read-only:
- Consolidated: Data is organised with consistent naming conventions, measurements, attributes and semantics. This allows data from across the organisation to be used effectively in a consistent manner.
- Subject-orientated: Large amounts of data are stored across an organisation; some of it could be irrelevant for reports and makes querying the data difficult. A data warehouse organises only the key business information from operational sources so that it is available for analysis.
- Historical: A real-time TPS represents the current value at any time; an example could be stock levels. If past data is kept, querying the database at different times could return different responses. A data warehouse stores a series of snapshots of an organisation's operational data generated over a period of time.
- Read-only: Once data is moved into a data warehouse, it becomes read-only unless it was incorrect. Since it represents a snapshot of a certain time, it must never be updated; the only operations that occur in a data warehouse are loading and querying data.
Backup Procedures
Since business organisations have become very dependent on TPSs, a breakdown in a TPS may interrupt the business' regular routines and stop its operation for a certain amount of time. In order to prevent data loss and minimise disruptions when a TPS breaks down, a well-designed backup and recovery procedure is put into use. The recovery process can rebuild the system when it goes down.
Recovery Process
A TPS may fail for many reasons, including system failure, human error, hardware failure, incorrect or invalid data, computer viruses, software application errors or natural disasters. Since it is not possible to prevent a TPS from ever failing, it must be able to cope with failures: it must detect and correct errors when they occur. When the system fails, a TPS goes through a recovery of the database; this involves the backup, the journal, checkpoints and the recovery manager:
- Backup: A backup copy is usually produced at least once a day. It should be stored in a secure location, protected from damage and loss.
- Journal: A journal maintains an audit trail of transactions and database changes. Two kinds of logs are used: a transaction log records all the essential data for each transaction, including data values, the time of the transaction and the terminal number; a database change log contains before and after copies of records that have been modified by transactions.
- Checkpoint: A checkpoint record contains the information necessary to restart the system. Checkpoints should be taken frequently, such as several times an hour. When a failure occurs, processing can resume from the most recent checkpoint, so only a few minutes of processing work needs to be repeated.
- Recovery Manager: A recovery manager is a program that restores the database to a correct condition so that transaction processing can restart.
Depending on how the system failed, one of two different recovery procedures can be used. Generally, the procedure involves restoring data from a backup device and then running the transaction processing again. The two types of recovery are backward recovery and forward recovery:
- Backward recovery: Used to undo unwanted changes to the database. It reverses the changes made by transactions that have been aborted. It involves reprocessing each transaction, which is very time-consuming.
- Forward recovery: Starts from a backup copy of the database. The transactions recorded in the journal between the time the backup was made and the present time are then reprocessed. It is much faster and more accurate.
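Forward recovery can be sketched in a few lines: restore the backup copy, then replay the journal in order. The record layout here is hypothetical (account keys mapped to balances, with the journal holding signed amounts):

```python
def forward_recovery(backup, journal):
    """Rebuild the current state from a backup plus the journal of
    transactions recorded since that backup was taken."""
    database = dict(backup)              # restore the backup copy
    for account, delta in journal:       # reapply each logged transaction in order
        database[account] = database.get(account, 0) + delta
    return database

backup = {"acct_1": 500, "acct_2": 120}
journal = [("acct_1", -50), ("acct_2", +30), ("acct_3", +10)]
print(forward_recovery(backup, journal))
# {'acct_1': 450, 'acct_2': 150, 'acct_3': 10}
```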
Types of Back-up Procedures
There are two main types of back-up procedures: grandfather-father-son and partial backups.
Grandfather-Father-Son
This procedure refers to keeping at least three generations of backup master files: the most recent backup is the son, and the oldest backup is the grandfather. It is commonly used for a batch transaction processing system with magnetic tape. If the system fails during a batch run, the master file is recreated by using the son backup and then restarting the batch. However, if the son backup fails, is corrupted or destroyed, then the next generation up (the father) is required. Likewise, if that fails, then the next generation up again (the grandfather) is required. Of course, the older the generation, the more the data may be out of date. Organisations can keep up to twenty generations of backup.
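The rotation scheme described above can be sketched as follows. This is a toy model, not a real backup tool: each "tape" is just a string, and the class name and generation count are illustrative:

```python
from collections import deque

class GFSBackups:
    """Grandfather-father-son rotation: each new backup becomes the son,
    pushing older generations back; only a fixed number are kept."""

    def __init__(self, generations=3):
        # Newest (son) at index 0, oldest (grandfather) at the end;
        # maxlen silently discards generations beyond the limit.
        self.tapes = deque(maxlen=generations)

    def take_backup(self, master_file_copy):
        self.tapes.appendleft(master_file_copy)

    def restore(self):
        # Try the son first; fall back to older generations if needed.
        # A real system would check each tape for corruption here.
        for tape in self.tapes:
            if tape is not None:
                return tape
        raise RuntimeError("no usable backup generation")

backups = GFSBackups()
for day in ["mon_data", "tue_data", "wed_data"]:
    backups.take_backup(day)
print(backups.restore())  # 'wed_data' (the son, i.e. the most recent backup)
```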
Partial Backups
Here only parts of the master file are backed up. The master file is usually backed up to magnetic tape at regular times; this could be daily, weekly or monthly. Transactions completed since the last backup are stored separately in what are called journals, or journal files. If the system fails, the master file can be recreated from the backup tape and the journal files.
Updating in a Batch
This is used when transactions are recorded on paper (such as bills and invoices) or stored on magnetic tape. Transactions are collected and updated as a batch when it is convenient or economical to process them. Historically, this was more widely used, as the information technology did not exist to allow real-time processing.
The two stages in batch processing are:
- Collecting and storing the transaction data in a transaction file - this involves sorting the data into sequential order.
- Processing the data by updating the master file - which can be difficult; this may involve data additions, updates and deletions that may need to happen in a certain order. If an error occurs, the entire batch fails.
Updating in batch requires sequential access - since it uses magnetic tape, this is the only way data can be accessed. A batch run starts at the beginning of the tape and reads it in the order in which the data was stored; it is very time-consuming to locate specific transactions.
The information technology used includes a secondary storage medium which can store large quantities of data inexpensively (hence the common choice of magnetic tape). The software used to collect data does not have to be online - it does not even need a user interface.
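The two stages of a batch run can be sketched in Python. This is a toy model: in-memory lists of (key, amount) pairs stand in for tape records, and the key names are hypothetical:

```python
def batch_update(master, transactions):
    """master: list of (key, value) pairs already in sequential (key) order.
    transactions: collected transaction records, in arrival order."""
    batch = sorted(transactions)       # stage 1: sort into sequential order
    updated = dict(master)
    for key, delta in batch:           # stage 2: one pass updating the master file
        updated[key] = updated.get(key, 0) + delta
    return sorted(updated.items())     # new master file, still in key order

master = [("A100", 40), ("B200", 15)]
txns = [("B200", -5), ("A100", 10), ("C300", 7)]  # additions and updates
print(batch_update(master, txns))
# [('A100', 50), ('B200', 10), ('C300', 7)]
```

Sorting the batch into the master file's order is what makes a single sequential pass over the tape possible.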
Updating in Real-Time
This is the immediate processing of data, providing instant confirmation of a transaction. It may involve a large number of users simultaneously performing transactions that change data. Because of advances in technology (such as increases in the speed of data transmission and larger bandwidth), real-time updating is now possible.
Steps in a real-time update involve sending the transaction data to an online database in a master file. The person providing the information is usually able to help with error correction and receives confirmation of the transaction's completion.
Updating in real-time uses direct access of data. This occurs when data is accessed without having to read the data items that precede it. The storage device stores data in a particular location calculated from the data itself by a mathematical procedure, which gives an approximate location of the data. If the data is not found at this location, the system searches through successive locations until it is found.
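The direct-access scheme described above is essentially hashing with linear probing, sketched here as a toy in-memory table (the slot count and record keys are hypothetical):

```python
SLOTS = 8

def location(key):
    # The "mathematical procedure": compute an approximate slot from the key.
    return hash(key) % SLOTS

def store(table, key, value):
    i = location(key)
    # If the slot is taken by another key, search successive locations.
    while table[i] is not None and table[i][0] != key:
        i = (i + 1) % SLOTS
    table[i] = (key, value)

def fetch(table, key):
    i = location(key)
    while table[i] is not None:
        if table[i][0] == key:
            return table[i][1]   # found without reading preceding records
        i = (i + 1) % SLOTS      # not here: try the next location
    return None

table = [None] * SLOTS
store(table, "TX1001", "booked")
store(table, "TX1002", "cancelled")
print(fetch(table, "TX1001"))  # 'booked'
```

Unlike the sequential tape scan used in batch processing, the lookup cost here does not depend on where the record sits relative to the start of the storage medium.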
The information technology used could be a secondary storage medium that can store large amounts of data and provide quick access (hence the common choice of magnetic disk). It requires a user-friendly interface, as a rapid response time is important.