
Thursday, February 28, 2019

Business Continuity Planning

Though interruptions to business can be caused by major natural disasters such as fires, floods, earthquakes and storms, or by man-made disasters such as wars, terrorist attacks and riots, it is usually the more mundane and less sensational disasters such as power failure, equipment failure, theft and sabotage that are behind most disruptions to business. A Business Continuity Plan, or Continuity of Business (CoB) Plan, defines the process of identifying the applications, customers (internal and external) and locations that a business plans to keep functioning in the event of such disruptive events, as well as the failover processes and the length of time for such support. This encompasses hardware, software, facilities, personnel, communication links and applications (MphasiS, 2003).

A Business Continuity Plan is formulated to enable the organization to recover from a disaster with minimum loss of time and business by restoring its critical operations quickly and smoothly. The plan should be devised so that it covers the recovery, resumption and maintenance not only of the technology components but of the entire business, since recovery of the ICT systems and infrastructure alone does not always imply full restoration of business operations. Business Recovery Planning at XE therefore envisages consideration of all risks to business operations, including not only ICT applications and infrastructure but also risks that directly impact other business processes. After an extensive Business Impact Analysis (BIA), a Risk Assessment for XE was carried out by evaluating the assumptions made in the BIA under various threat scenarios.
Threats were analyzed on the basis of their potential impact on the organization, its customers and the financial market it is associated with, and were then prioritized according to their severity. The following threats were identified for XE:

1. Natural disasters such as floods, fires, storms, earthquakes and extreme weather.
2. Man-made disasters such as terrorist attacks, wars and riots.
3. Routine threats, which include:
   a. Non-availability of critical personnel
   b. Inaccessibility of critical buildings, facilities or geographic regions
   c. Malfunctioning of equipment or hardware
   d. Inaccessibility or corruption of software and information due to various causes, including virus attacks
   e. Non-availability of support services
   f. Failure of communication links and other essential utilities such as power
   g. Inability to meet financial liquidity requirements
   h. Unavailability of essential records

Organizing the BCP Group

The first and most important step in developing a successful disaster recovery plan is to create management awareness. Top-level management will allocate the necessary resources and time from various areas of the organization only if it understands and supports the value of disaster recovery; management also has to accord approval for final implementation of the plan. The BCP group therefore has to include a member from management who can not only provide management's inputs but also apprise management of progress and obtain its feedback. Besides this, each core or priority area has to be represented by at least one member. Finally, there has to be an overall Business Continuity Plan coordinator who is responsible not only for coordination but also for all other aspects of BCP implementation, such as training, updating, creating awareness and testing.
The coordinator usually has his or her own support team. XE's Business Continuity Planning team would therefore comprise representatives from management and from each of the core or priority areas, held together by the BCP coordinator. Even if the BCP is outsourced, it is necessary for management and nominated members from the core or priority areas to be closely associated with each step of the planning process.

Crucial Decisions

The key decisions in formulating the Business Continuity Plan for XE were associated with the individual steps undertaken in making the BCP. The first step, Business Impact Analysis (BIA), involved a work flow analysis to assess and prioritize all business functions and processes, including their interdependencies. At this point, the potential impact of business disruptions was identified along with all the legal and regulatory requirements for XE's business functions and processes. Based on these, decisions on allowable downtime and acceptable levels of loss were taken, and estimates were made of Recovery Time Objectives (RTOs), Recovery Point Objectives (RPOs) and recovery of the critical path.

The second step comprised risk assessment, during which business processes and the assumptions made in the BIA were evaluated under various threat scenarios. The decisions made at this stage included which threat scenarios to adopt, the severity of the threats and, finally, which risks to consider in the BCP based on the assessments made. The next step, Risk Management, involved drawing up the plan of action with respect to the various risks. This was the stage at which the actual Business Continuity Plan was drawn up, formulated and documented.
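The BIA prioritization described above can be sketched in a few lines of code. The process names, RTO and RPO figures below are purely illustrative assumptions, not XE's actual data; the point is only that the tightest RTO defines the critical path of recovery.

```python
# Hypothetical sketch of BIA prioritization: each business process is
# assigned a Recovery Time Objective (RTO, maximum tolerable downtime)
# and a Recovery Point Objective (RPO, maximum tolerable data loss).
processes = [
    {"name": "payments", "rto_hours": 2, "rpo_minutes": 5},
    {"name": "reporting", "rto_hours": 48, "rpo_minutes": 1440},
    {"name": "customer_support", "rto_hours": 8, "rpo_minutes": 60},
]

def recovery_order(procs):
    """Critical path first: recover processes with the tightest RTO/RPO."""
    return [p["name"] for p in
            sorted(procs, key=lambda p: (p["rto_hours"], p["rpo_minutes"]))]

print(recovery_order(processes))  # ['payments', 'customer_support', 'reporting']
```

In a real BCP the ordering would also weigh interdependencies between processes, which a simple sort cannot capture.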
Crucial decisions such as the specific steps to be taken during a disruption, the training programs to be organized for personnel implementing the BCP, and the frequency of updating and revision required were taken at this stage. Finally, in the Risk Monitoring and Testing stage, decisions regarding the suitability and effectiveness of the BCP were taken with reference to the initial objectives of the Business Continuity Plan.

Business Rules and System Back-ups

An acquaintance of mine works for the Motor Vehicles Department, which issues driving licenses for private and commercial vehicles. Applicants for a license first come in and deposit a fee. The particulars of the applicant, along with a photograph and biometrics in the form of fingerprints, are then entered into the database. Thereafter, the applicant undergoes a medical test, the results of which are also entered into the system's database. If approved in the medical test, the applicant has to appear for an initial theoretical test on road signs and rules and regulations. If the applicant passes this test, he or she is given a Learner's License. The applicant then comes back for the practical driving test after a month and is awarded the driving license on passing it. New additions are made to the database of the driving license system at every stage of this workflow. Though the tests for the learner's license and the driving license are held three days a week, an individual can apply on any of the department's five working days. People also come in for renewal of driving licenses. Driving licenses are usually issued for a period of one to five years, depending on the age and physical condition of the applicant.
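The licensing workflow above, in which every stage appends data to the database, can be sketched as follows. The stage names and the dict-based "database" are assumptions made for illustration, not the department's actual schema.

```python
# Illustrative sketch: each stage of a license application writes a
# record, so database changes occur throughout the workflow rather
# than at a single point in time.
STAGES = ["application_fee", "biometrics", "medical_test", "theory_test",
          "learners_license", "driving_test", "license_issued"]

database = {}  # applicant_id -> list of completed stages

def record_stage(applicant_id, stage):
    """Append a completed stage to the applicant's record."""
    if stage not in STAGES:
        raise ValueError(f"unknown stage: {stage}")
    database.setdefault(applicant_id, []).append(stage)

record_stage("A-1001", "application_fee")
record_stage("A-1001", "biometrics")
record_stage("A-1001", "medical_test")
print(database["A-1001"])  # ['application_fee', 'biometrics', 'medical_test']
```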
In the case of commercial vehicles, an applicant first has to obtain a trainee driving license and work as an apprentice driver for two years before becoming eligible for a license to drive a commercial vehicle. Moreover, a commercial driving license is issued only for a year at a time, and the driver has to come back for evaluation and medical tests every year. The number and frequency of transactions are therefore much higher for commercial vehicles.

As is evident from the business rules of the department, data is added and modified frequently for a specific applicant during the initial application process. Data is added or modified again after an interval of one month for the same applicant, and thereafter only after a period of five years, when the applicant comes back for renewal. There is, however, always the possibility that someone loses or misplaces a license and comes back to have a duplicate issued. When the scenario of multiple applicants arriving on any day for fresh, duplicate or renewed licenses is considered, it becomes evident that transactions are not periodic or time-bound but continuous: transactions can happen at any time during working hours, resulting in changes to the system's database.

Taking only complete backups of the system would not be the optimum business solution under these circumstances. Whatever frequency of complete backup is adopted, the chance of losing data would be very high in the case of database failure or any other disastrous event resulting in system failure or corruption. Moreover, taking complete backups very frequently would be a laborious and cumbersome exercise.
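The difference between copying everything on every backup and copying only what changed can be sketched as follows. The record contents are invented for illustration, and the dicts stand in for the department's actual database and backup media.

```python
# Minimal sketch contrasting full and incremental backup.
def full_backup(db):
    """A full backup copies every record, changed or not."""
    return dict(db)

def incremental_backup(db, last_backup):
    """An incremental backup copies only records that differ from the
    last backup (new or modified)."""
    return {k: v for k, v in db.items() if last_backup.get(k) != v}

db = {"A-1": "fee paid", "A-2": "medical done"}
snapshot = full_backup(db)                # end-of-day complete backup
db["A-3"] = "fee paid"                    # new applicant arrives
db["A-1"] = "theory test passed"          # existing record modified
delta = incremental_backup(db, snapshot)  # only the two changed records
print(sorted(delta))  # ['A-1', 'A-3']
```

Real incremental backup tools work at the block or byte level rather than on whole records, but the principle, comparing against the last backup and shipping only the difference, is the same.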
The ideal backup method in this case is incremental backup, in which only the data that is added or modified is backed up, the moment it is added or modified, while a complete backup is taken at a periodic frequency. Accordingly, the Motor Vehicles Department has opted for continuous incremental backup with a complete backup taken at the end of the day. As a Business Continuity Plan measure, the department uses a remote backup mirroring solution that provides host-based, real-time continuous replication to a disaster recovery site far away from its servers, over standard IP networks. This mirroring process uses continuous, asynchronous, byte-level replication and captures changes as they occur. Because it copies only changed bytes, it reduces network use, enables quicker replication and greatly reduces latency. The remote mirroring solution integrates with the existing backup solutions, can replicate data to perform remote backups, and can take snapshots at any time without any impact on the performance of the production servers. It replicates over the available IP network, both LAN and WAN, and was deployed without any additional cost. It thus accords the department the maximum possible safeguard against data loss from failures and other disasters.

Database Processing Efficiency versus Database Storage Efficiency

Though storage costs have decreased dramatically over the years, the tension between database processing efficiency and database storage efficiency continues to be an issue, because the overall performance of a system is affected by the way data is stored and processed.
In other words, even though the volume of storage space available may no longer be a financial or physical constraint, the way this space is utilized has an impact on database processing efficiency, which in turn affects the overall performance of the application or system. Though it is possible to compromise on database storage efficiency to derive greater processing efficiency and thereby improve overall system performance, true optimization requires striking a fine balance between the two. There can be many tradeoffs between data processing speed and the efficient use of storage space, and there is no set rule on which tradeoffs to adopt; the choice differs according to how data is actually created, modified and moved through the system. Certain broad guidelines can, however, be followed to increase the overall utility of the database management system. Examples of such guidelines are found in the cases of derived fields, denormalization, primary key and indexing overheads, reloading of the database, and query optimization.

Derived Fields

Derived fields are fields whose data is obtained by manipulating or operating on two or more original fields. The issue at stake is whether the data should be stored only in its original form, or also as processed data in derived fields. When the data is stored only in the original form, the derived field is calculated as and when required. Storing derived data obviously requires greater storage space but less processing time; that is, storage efficiency is lower while processing efficiency is higher.
However, the decision on whether to store derived fields depends on other considerations, such as how often the calculated data is likely to change and how often it will be required. An example will make this clearer. A university student's grade point standing is a classic derived field. For a specific class, a student's grade points are obtained by multiplying the points corresponding to the student's grade by the number of credit hours associated with the course. The grade's points and the number of credit hours are the original data, and their product, the grade points, is the derived field. Whether to store the derived field in this case depends on how often a student's grade points are likely to change and how often they are actually needed. The grades of a student who has already graduated are unlikely to undergo any more changes, whereas the grades of a student still at the university change regularly. In such a case, storing the grade points of a current undergraduate would be more meaningful than storing those of a graduate. Then again, if the undergraduate's grades are reported only once a term, it may not be worthwhile to store grade points as derived fields at all. The significance of the matter becomes clear when we consider a database of thousands of students. The tradeoff is between storing the grade points as derived fields, gaining processing efficiency but losing storage efficiency, and not storing them, gaining storage efficiency but losing processing efficiency.
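The grade-point example above can be sketched in code. The grade scale and course data are invented for illustration; the two approaches shown are computing the derived field on demand versus materializing it alongside the original fields.

```python
# Derived-field tradeoff: compute grade points on demand (storage-
# efficient) or store them with the raw data (processing-efficient).
GRADE_POINTS = {"A": 4.0, "B": 3.0, "C": 2.0}

courses = [{"grade": "A", "credit_hours": 3},
           {"grade": "B", "credit_hours": 4}]

def grade_points(course):
    """Derived field, computed on demand from the two original fields."""
    return GRADE_POINTS[course["grade"]] * course["credit_hours"]

# Alternative: materialize the derived field once and store it.
for course in courses:
    course["grade_points"] = grade_points(course)

total = sum(c["grade_points"] for c in courses)
print(total)  # 4.0*3 + 3.0*4 = 24.0
```

For a graduated student whose grades never change, the stored value is computed once and read many times; for a current student it must be recomputed or updated on every grade change, which is where the tradeoff bites.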
Denormalization

Denormalization is a process by which the number of records in a fully normalized database can be considerably reduced while still adhering to the rule of First Normal Form, which states that the intersection of any row with any column should yield a single data value. The process of collapsing multiple records into a single record is applicable only in certain specific cases in which the number and frequency of transactions are known. Normalization of a database is required to maintain the accuracy and integrity of the data, which in turn underpins the validity of the reports the system generates and the reliability of the decisions based on them. Denormalization done indiscriminately can upset the balance of the database even while economizing on storage space.

The Dilemma of Indexing

Unplanned use of primary keys has a markedly negative effect on database storage efficiency. Many database systems resort to setting an index on a field. When a field is indexed, the system sets a pointer to that field, and the pointer helps in processing the data much faster. However, indexing fields means the system stores and maintains not only the data itself but also information about the data's storage. The question therefore again boils down to whether to achieve higher processing efficiency by compromising storage efficiency, or to enable higher storage efficiency at the cost of processing efficiency. Sorting the data periodically is one way around the dilemma of indexing, but sorting itself is highly taxing on a system's resources; in large organizations with millions of records, a sort may take hours, during which all computer operations remain suspended.

Other Factors

Storage efficiency and processing efficiency are also interdependent in other ways.
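The indexing tradeoff discussed above can be observed directly with SQLite: the index is extra stored structure the system must maintain, and in exchange a lookup that previously scanned the whole table searches the index instead. The table and column names below are invented for illustration.

```python
# Sketch of the indexing tradeoff with an in-memory SQLite database.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE licenses (license_no TEXT, holder TEXT)")
conn.executemany("INSERT INTO licenses VALUES (?, ?)",
                 [(f"L-{i}", f"holder{i}") for i in range(1000)])

query = ("EXPLAIN QUERY PLAN "
         "SELECT holder FROM licenses WHERE license_no = 'L-500'")

# Without an index, the lookup scans every row of the table.
plan_before = conn.execute(query).fetchall()[0][-1]

# The index costs storage (it is data about the data) but the same
# lookup now searches the index instead of scanning the table.
conn.execute("CREATE INDEX idx_license_no ON licenses (license_no)")
plan_after = conn.execute(query).fetchall()[0][-1]

print(plan_before)  # e.g. "SCAN licenses"
print(plan_after)   # e.g. "SEARCH licenses USING INDEX idx_license_no ..."
```

Every insert, update and delete on the table thereafter also has to update the index, which is the maintenance overhead the text refers to.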
Deleting data without reloading the database from time to time may result in the deleted data not actually being removed from the database; the data is merely hidden by setting a flag variable or marker. This results not only in low storage efficiency but also in low processing efficiency. Reloading the database removes the deleted data for good, leading to a smaller volume of data and a more efficient use of resources, thereby boosting processing efficiency. Similarly, haphazard coding structures can negatively affect both the storage efficiency and the processing efficiency of a database.

Completely ignoring storage efficiency while prioritizing processing efficiency can never lead to database optimization; conversely, optimization can never be achieved by overemphasizing storage efficiency. The objective is to strike the right balance. The interrelationship between database storage efficiency and database processing efficiency therefore keeps the controversy between the two alive in spite of the dramatic decrease in storage costs over the years.

References

MphasiS Corporation, 2003, MphasiS Disaster Recovery/Business Continuity Plan, [Online] Available: http://www.mphasis.com [Accessed June 27, 2008]
