Sunday, January 26, 2020

Benefits of Different Oxygen Levels Administered in ICU

ABSTRACT
There have been numerous studies conducted to identify the benefits of different oxygen levels administered to ICU (Intensive Care Unit) patients. However, the studies do not reveal a definitive conclusion. The proposed systematic review plans to identify whether conventional or conservative oxygen therapy is more constructive in critically ill adult patients admitted to ICU.

BACKGROUND
Oxygen therapy is a treatment that provides oxygen gas to aid breathing when it is difficult to respire, and it became a common form of treatment by 1917 (Macintosh et al. 1999). It is used for both acute and chronic cases and can be implemented according to the needs of the patient in hospital, pre-hospital or entirely out of hospital, based on the opinions of their medical professionals. It is listed by the World Health Organisation (WHO) among the most efficient and safest medicines required by a health system. PaO2 has become the guideline test for determining oxygen levels in blood, and by the 1980s pulse oximetry, which measures arterial oxygen saturation, was progressively used alongside PaO2 (David 2013). The chief benefits of oxygen therapy comprise slowing the progression of hypoxic pulmonary hypertension and improvements in emotional status, cognitive function and sleep (Zielinski 1998). In the UK, according to national audit data, about 34% of ambulance journeys involve oxygen use at some point, while 18% of hospital inpatients will be treated with oxygen at any time (Lo EH 2003). In spite of the benefits of this treatment, there have been instances where oxygen therapy can negatively impact a patient's condition. The most commonly recommended saturation target for oxygen intake is about 94-98%, and saturation levels of about 88-92% are preferred for those at risk of carbon dioxide retention (BMA 2015). According to standard ICU practice, the conservative method denotes that patients receive oxygen therapy to maintain PaO2 between 70 and 100 mm Hg or arterial haemoglobin saturation between 94-98%, while the conventional method allows PaO2 values to rise up to 150 mm Hg or SpO2 values between 97% and 100% (Massimo et al. 2016). There are also low flow systems where the delivered oxygen is at 100% and has flow rates lower than the patient's inspiratory flow rate (i.e., the delivered oxygen is diluted with room air), and hence the Fraction of Inspired Oxygen (FIO2) may be low or high. However, this depends on the particular device and the patient's inspiratory flow rate.

AIM
To investigate and conclude whether the use of a strict protocol for conservative oxygen supplementation would help to improve outcomes, while maintaining PaO2 within physiologic limits, among critically ill patients.

RESEARCH QUESTION
A well-defined, structured and exclusive research question will serve as a guide in making meticulous decisions about study design and population, and consequently about what data can be collected and used for analysis (Brian, 2006). The early process of finding the research question is a challenging task, as the scope of the problem is bound to be broad. Significant time and care are needed to polish, extract and compare the information required from the vast sea of information (Considine 2015). If a proper and specific research question is not formed, the whole process will be useless (Fineout-Overholt 2005).
The fundamental success of any research project is attributed to establishing a clear and answerable research question that is informed by a complete and systematic review of the literature, as outlined in this paper. The PICO framework is universally used to develop a robust and answerable research question, and it is also a useful framework for assuring quality and for evaluating projects. PICO stands for Problem / Population, Intervention, Comparison, and Outcome. The research question presented in this paper is to identify whether conventional or conservative oxygen therapy is more beneficial among critically ill adult patients admitted to the Intensive Care Unit.
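As an illustration, the question of this review maps onto PICO as follows (framed from the stated question rather than quoted from any published protocol):

P (Population): critically ill adults (18 years or older) admitted to the ICU
I (Intervention): conservative oxygen therapy (PaO2 70-100 mm Hg or SpO2 94-98%)
C (Comparison): conventional oxygen therapy (PaO2 up to 150 mm Hg or SpO2 97-100%)
O (Outcome): mortality and related clinical outcomes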
LITERATURE REVIEW
The literature has focused on the effect of conservative and conventional oxygen therapy methods on mortality among patients in an Intensive Care Unit. Although there have been several studies analysing which of the two methods is more beneficial to critically ill patients, a definitive study determining the mortality rate among the different categories still needs to be identified and investigated.

Different devices used to administer oxygen: A nasal cannula provides about 24-40% oxygen at flow rates up to 6 L/min in adults (Fulmer JD 1984). A basic oxygen mask delivers about 35-50% FIO2 and can have flow rates from 5-10 L/min depending on the fit and the required flow rate. Another respiratory aiding device is the partial rebreathing mask, which has an additional reservoir bag; it is also classified as a low flow system, with a flow rate of 6-10 L/min, and delivers about 40-60% oxygen. The non-rebreathing mask is similar to the partial rebreathing mask but has an additional series of one-way valves, and it delivers about 60-80% FIO2 at a flow rate of 10 L/min.

Review and findings of different oxygen therapy studies: A systematic review of two different published journals on the usage of additional oxygen when managing acute myocardial infarction arrived at the same result: there is no significant benefit when oxygen therapy is administered compared with air breathing (Cabello 2010), and it may in fact be damaging, resulting in greater infarct size and a higher mortality rate (Wijesinghe 2009). Although a number of smaller studies could clarify these reviews, none of the original studies reached a statistically significant result (Atar 2010); this stresses the need to provide data that validates the requirement for further analysis. Studies to support this have already been started: the AVOID (Air Versus Oxygen In Myocardial Infarction) study is presently recruiting patients to resolve this critical medical question (Stub 2012). Actual clinical trial data on the effects of varied inspired oxygen levels are even more inadequate in acute ischemic stroke. It is proposed that oxygen therapy may be beneficial if administered within the first few hours of onset; however, it has also been observed that continued administration may induce harmful results (higher 1-yr mortality) (Ronning 1999).

In a cohort study in which more than 6,000 patients were followed after resuscitation from cardiac arrest, hyperoxemia (defined as a PaO2 > 300 mm Hg (40 kPa)) produced considerably worse results than both normoxemia (60-300 mm Hg (8 to 40 kPa)) and hypoxemia (PaO2 < 60 mm Hg). There is also no robust proof for the postulation that an increased PaO2 is correlated with improved long-term survival in critically ill patients (Young JD 2000). In a retrospective study considering arterial oxygenation in more than 36,000 mechanically ventilated patients, a biphasic relationship was observed within a span of 24 hours between PaO2 and in-hospital mortality (De 2008). The average PaO2 level found was 99 mm Hg, yet the lowest unadjusted hospital mortality occurred just below 150 mm Hg. A very similar but larger study conducted in Australia and New Zealand reported a mean PaO2 of 152.5 mm Hg, indicating supraphysiological levels of oxygenation, with 49.8% of the 152,680 cohort categorised as hyperoxemic (PaO2 > 120 mm Hg) (Eastwood, 2012). In contrast to the Dutch study, even though hypoxemia was associated with elevated mortality, after adjustment for disease severity a progressive association between hyperoxemia and in-hospital mortality could not be established (Martin 2013). The assumption that patients with hypoxemia secondary to ARDS (acute respiratory distress syndrome) respond positively to elevated arterial oxygenation underpins many studies done in this field (McIntyre 2000). Nevertheless, data from clinical trials in patients with ARDS seem to contradict this assumption, as arterial oxygenation and long-term outcome are frequently disconnected (Suchyta 1992). Among the studies that report a correlation between arterial oxygenation and mortality, a systematic review of 101 clinical studies in ARDS patients came to the conclusion that the P/F ratio was not such a reliable predictor (Krafft 1996). Thus a more intense study was conducted to compare supplementary oxygen therapy with no oxygen therapy in normoxic patients with ST segment elevation myocardial infarction (STEMI). Oxygen therapy has been universally used only for the initial treatment of patients with STEMI, based on the belief that additional oxygen may increase oxygen delivery to ischemic myocardium and hence reduce myocardial injury; this is supported by laboratory studies done by Atar in 2010. The adverse effects of supplementary oxygen therapy were noted in a meta-analysis of 3 small randomized trials done by Cabello in the same year. More recently, another analysis compared high concentration oxygen with titrated oxygen in patients with suspected acute myocardial infarction and found no difference in myocardial infarct size on cardiac magnetic resonance imaging (Ranchord 2012). Hence, there are no studies that assess the effects of supplemental oxygen therapy in the setting of contemporary therapy for STEMI, specifically acute coronary intervention. With these reports and analyses put together, we can safely deduce that there remains a substantial amount of uncertainty over the usage of routine supplemental oxygen in uncomplicated acute myocardial infarction, with no clear indication or recommendation for the level of oxygen therapy in normoxic patients in the STEMI guidelines.
The annual congress of the European Society of Intensive Care Medicine (2016) reported that deaths among patients in the ICU were lowered by 9% when using a conservative oxygen strategy as compared with the conventional one (JAMA 2016).

METHODOLOGY
Firstly, the terms method and methodology need to be differentiated. A method is a process used to collect and examine the data, whereas methodology includes a philosophical inquiry into the research design, as stated by Wainworth (1997). It is vital that a suitable methodology is analysed for carrying out the research question and for assembling the data (Matthews 2010). Research methodology is a way to find out the result of a given problem on a specific matter, also referred to as the research problem (Jennifer 2011). In a methodology, the researcher uses different criteria for solving the given research problem and always tries to investigate the question systematically to find all the answers through to a conclusion. If the researcher does not work systematically on the problem, there is less possibility of finding the final result. In finding or exploring research questions, a researcher faces a lot of problems that can be effectively resolved by using a correct research methodology (Industrial Research Institute, 2010). This research proposal was done under the systematic review method because it provides a very comprehensive and clear way of assessing the evidence (Chalmers 2001). It also lowers error and bias and establishes a high standard of accuracy (Jadad, 1998). Healthcare providers, researchers, consumers and policy makers are overwhelmed with the data, evidence and information available from healthcare research. It is unlikely that all this information is digested and used for future decisions. Hence a systematic review of such research will help to identify, assess and synthesise the information, based on evidence, needed to make those critical decisions (Mulrow 1994). There are a number of reasons for choosing a systematic review for this study. A systematic review is generally done to resolve mismatched evidence, to verify the accuracy of current practice, to answer clinically unanswered questions, to identify changes in practice or to focus on the need for any future research. Systematic reviews are increasingly being used as a preferred research method in the education of postgraduate nursing students (Bettany-Saltikov, 2012). One of the best resources available on the conduct of systematic reviews of interventions is the Cochrane Collaboration (Tonya 2012). As defined by the Cochrane Collaboration (Higgins & Green, 2011, p. 6): "A systematic review attempts to collate all empirical evidence that fits pre-specified eligibility criteria in order to answer a specific research question. It uses explicit, systematic methods that are selected with a view to minimizing bias, thus providing more reliable findings from which conclusions can be drawn and decisions made." The aim of a systematic review is to incorporate the existing knowledge on a particular subject or scientific question (British Journal of Nutrition, 2012).
According to Gough et al (2012), a systematic review is a research method undertaken to review several relevant research literatures. Systematic reviews can be considered the gold standard for reviewing the extensive literature on a specific topic, as they synthesise the findings of previous research investigating the same or similar questions (Boland et al 2008). Because they use systematic and rigorous methods, systematic reviews are often referred to as original empirical research, as they review primary data, which can be either qualitative or quantitative (Aveyard & Sharp 2011). Over the past years, various standards have evolved for reporting systematic reviews, starting from an early statement called the QUOROM guidelines to an updated, widely accepted statement called the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) (Moher et al, 2009). While authors differ in how they approach a systematic review, and there is no universal agreement on one methodology for conducting a review, there is a fundamental set of recommendations regarding the reporting of systematic reviews that authors are advised to follow (Tonya 2012).

METHODS

SEARCH STRATEGIES
The selection of relevant studies is based on two concepts: sensitivity and specificity (Wilma 2016). The purpose of the literature search is to identify existing published research in the particular area of interest, to assist the researcher in clarifying and specifying the research question, and to identify whether the research question has already been answered. The search of the literature must be strategic and systematic, and informed by a documented strategy. Search strategies have two major considerations: search terms and databases. Some of the most common and beneficial search strategies used in systematic reviews are searching the Cochrane Central Register of Controlled Trials (CENTRAL), hand searching, and grey literature, which contains unpublished studies, clinical trials and ongoing research on the trials. Contacting an expert and extracting information is another useful method. The internet provides access to a huge selection of published and unpublished databases. Studies can also be found by referring to the reference lists of the available published data. The databases referenced in this paper have been searched and collected for extraction from the vast base of journals accessible through Northumbria University. Journals from Medline, Ovid, ELSEVIER, PubMed and the Cochrane Central Register of Controlled Trials, the Journal of the American Medical Association (JAMA), and articles from CHEST, Intensive Care Medicine, CLOSE and the ANZICS Clinical Trials Group, Resuscitation, and Critical Care journal (all of the selected journals from the databases were validated as peer-reviewed journals) were reviewed for this paper.
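For illustration only (an example constructed from the review's own terms, not the finalised strategy of this proposal), a search string combining these considerations for a database such as Medline might look like:

("oxygen therapy" OR "oxygen supplementation") AND (conservative OR conventional OR liberal) AND ("intensive care" OR ICU OR "critically ill") AND (mortality OR outcomes)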
INCLUSION AND EXCLUSION CRITERIA
The inclusion of unpublished and grey literature is essential for minimising the potential effect of publication bias (Cochrane Corner 2007). If systematic reviews are limited to published studies, they risk excluding vital evidence and yielding inaccurate results, which are likely to be biased towards positive results (Alderan 2002). The inclusion criteria should consider gender, age of participants, year(s) of publication and study type. For the purposes of this review, as conventional and conservative oxygen therapy studies are the primary research focus, patients aged 18 years or older and admitted to the Intensive Care Unit (ICU) with an expected length of stay of 72 hours or longer were considered for inclusion. Exclusion criteria also need to be justified and detailed, and papers may be excluded according to paper type (such as discussion papers or opinion pieces), language, participant characteristics, or year(s) of publication. The exclusion criteria comprise patients under 18 years, pregnant patients, those readmitted to ICU, patients with DNACPR (do not attempt cardiopulmonary resuscitation) orders, patients with neutropenia or immunosuppression, and patients on whom more than one arterial blood gas analysis was performed in 24 hours.

STUDY SELECTION
For the purpose of this research proposal, the literature selected is based on randomised clinical trials of conservative oxygen therapy methods and conventional (traditional) oxygen therapy methods used in ICU, and some systematic reviews of effective oxygen therapy in ICU, where they met the inclusion criteria. Controlled clinical trials provide the most appropriate method of testing the effectiveness of treatments (Barton 2000). Observational studies on the effect of hyperoxia after cardiac arrest are also reviewed. These studies can help to determine whether conservative oxygen therapy can help reduce mortality among critically ill patients.

PREPARATION FOR DATA EXTRACTION
Data will be extracted from the studies and grouped according to outcome measure. Data extraction tools should be used to ensure relevant data is collected, minimise the risk of transcription errors, allow the accuracy of data to be checked and serve as a record of the data collected. The data collected for extraction should be validated against evidence. It is necessary to extract the studies and data that will help in resolving the research question, which involves analysing different studies with a preferred methodology that reduces errors and bias.

QUALITY ASSESSMENT
The Cochrane risk of bias tool (Higgins 2011) will be used for the assessment of risk of bias in estimating the study outcome. For a better outcome, this review involved a few randomised clinical trials, some observational studies and pilot RCT studies for comparison among the various methods. Quality assessment is given special importance because of the inclusion of RCT and non-RCT methodology (Eggers et al 2001). Only quality studies that satisfy the inclusion, exclusion and data requirements, demonstrate validity without bias, and are needed to answer the research question are carefully selected.

SYNTHESIS STUDIES
Synthesis helps to summarise and connect different sources to review the literature on a specific topic, give suggestions, and link practice to research (Cosette 2000). It is done by gathering and comparing evidence from a variety of sources when there is conflicting evidence, a limited number of patients or large amounts of unclassified data. Systematic reviews of RCTs (randomised controlled trials) encompass the strongest form of clinical evidence (Sheik 2002) and occupy the highest layer in the hierarchy of evidence-based research, while qualitative case studies and expert opinions occupy the lowest layer (Evans 2003 and Frymark et al 2009).
RCTs help in understanding differences in data among various studies (for example, the studies considered here comparing conventional versus conservative oxygen therapy methods). The RCT is the most applicable study design for assessing the results of an intervention, because it limits the effects of bias when performed correctly (CRD's Guide 2009). It is also easier to understand, and any observed effect is easily attributed to the treatments being compared (Stuart 2000). The favourable results of an RCT lie with the methodological domain followed in the trial, and reviewing its practicality helps healthcare professionals, clinicians, researchers, policymakers and guideline developers to apply and review the effectiveness of the trials and tests. For example, if a study overestimates the effects of an intervention, it wrongly concludes that the intervention works; similarly, if the study underestimates the effects, it wrongly reflects that there is no effect. This is where RCTs stand out: minimal bias and sound evidence are the basis of such a study (according to Cochrane reviews). Hence RCTs form the gold standard of comparison studies when questioning the effectiveness of different interventions while limiting bias. As an example, groups that are randomly assigned differ from groups that follow criteria in the sense that the investigator may not be aware of certain attributes that they might have missed. It is also likely that the two groups will be similar on significant characteristics by chance. It is possible to control the factors that are known, but randomisation helps to control the factors that are not known, which drastically reduces bias. Therefore assigning participants in other study designs may not be as fair, and each participant may vary in characteristics on main standards (Cochrane Handbook for Systematic Reviews of Interventions 2017). Observational or non-randomised studies can be contentious, as the choice of treatment for each person and the observed results may cause differences among patients being given the different types of treatments (Stuart 2000).

ETHICAL CONSIDERATION
A systematic review is the scientific way of classifying the overabundant amount of information existing in research by systematically reviewing and accurately examining the studies concerning a particular topic. But in doing so, the topic of ethics is hardly questioned. This has some major downsides, as some systematic reviews may include studies with ethical deficiencies, which in turn leads to the publication of unethical research, and such research is susceptible to bias. A systematic review does not automatically confer approval on an original study. Hence systematic reviews that are methodologically and ethically assessed will have better ethical and methodological studies overall (Jean et al 2010). If an original study does not mention ethical issues, it does not automatically mean that the original paper avoided those ethical concerns or that it carries a lower risk (Tuech 2005). A primary rule for publishing articles is that redundant and overlapping data should be avoided, or, in an unavoidable case, cross-referenced with the purpose made clear to the readers (Elizabeth et al 2011). Plagiarism is clearly unacceptable, and care should be taken not to replicate other people's research work; original words and data need to be acknowledged as a citation or quote.
A responsible publisher should follow the COPE (Committee on Publication Ethics) flowchart, which explains how to handle suspected plagiarism (Liz 2008). It is also important to give information on funding and competing interests. The Cochrane Collaboration (2011) has very strict rules about funding, and it is important to give reasons why the author may or may not be neutral or impartial on the review prepared as it relates to financial support, while competing interests can be personal, academic or political (WAME Editorial Policy and Publication Ethics Committees 2009).

REFLECTION
The objective of systematic reviews is to translate the results into clinically useful and applicable information while meeting the highest methodological standards. They offer a very useful summary of the present scientific evidence in a particular domain, which can be developed into guidelines on the basis of such evidence. However, it is imperative that practitioners understand the reviews and the quality of the methodology and evidence used (Franco 2012). This study proposes a systematic review of conservative and conventional oxygen therapy methods used among critically ill adult patients in ICU. Incidentally, an RCT study by Susan (2016) found that the strategy of conservatively controlling oxygen delivery to patients in ICU results in lower mortality than the conventional and more liberal approach, whereby patients are often kept in a hyperoxemic state.

Saturday, January 18, 2020

Ethical Healthcare Issues

Running Head: ETHICAL HEALTHCARE

Ethical Healthcare Issues Paper
Wanda Douglas
Health Law and Ethics/HCS 545
October 17, 2011
Nancy Moody

Ethical Healthcare Issues Paper

In today's health care industry, providing quality patient care and avoiding harm are the foundations of ethical practice. However, many health care professionals are not meeting the guidelines or expectations of the American College of Healthcare Executives (ACHE) or obeying their organization's code of ethics policies, especially with the use of electronic medical records (EMR). Many patients fear that their personal health information (PHI) will be disclosed by hackers or unauthorized users. According to Carel (2010), "ethical concerns shroud the proposal in skepticism, most notably privacy. At the most fundamental level, issues arise about the sheer number of people who will have ready access to the health information of a vast patient population, as well as about unauthorized access via hacking." This paper will apply the four principles of ethics to the EMR system.

EMR History
According to Pickerton (2005), "In the 1960s, a physician named Lawrence L. Weed first described the concept of computerized or medical records. Weed described a system to automate and recognize patient medical records to enhance their utilization and thereby lead to improved patient care" (para 1). The advantages of EMR systems include shared and integrated information, improvement of quality care, and adaptation to regulatory changes. Even though EMR systems have many advantages, they also have some disadvantages. Some disadvantages of EMR systems involve security and confidentiality, which can raise ethical issues. To help identify and overcome ethical issues with EMR systems, health care professionals can use the four principles of ethics to identify where ethics are compromised. The four principles of ethics are autonomy, beneficence, nonmaleficence, and justice.

Autonomy
According to Mercuri (2010), "autonomy means allowing individuals to make their own choices and develop their own lives in the context of a particular society and in dialogue with that society; negatively, autonomy means that one human person, precisely as a human person, does not have authority and should not have power over another human person" (para 2). Autonomy affects ethics concerning EMR systems because health care organizations should have an EMR system that maintains respect for patient autonomy. Respect for patient autonomy requires health care organizations to make decisions concerning user access to the records.

Access of Records
Before a health care organization implements an EMR system, it should have a security system in place that includes an "access control" component. Access control within an EMR system is enforced through distinct user roles and access levels, strong login passwords, strict user verification/authorization and user inactivity locks. Health care professionals, regardless of their level, each have specific permissions for accessing data. Even though the organization may have the right security system in place to prevent unauthorized users from accessing patient records, autonomous patients will expect to have access to their own records with ease. Access to their records will assure them that their information is correct and safe.
Beneficence
According to Kennedy (2004), "beneficence is acting to prevent evil or harm, to protect and defend the rights of others to do or promote good" (p. 501). Beneficence affects ethics concerning EMR systems because health care professionals can help to improve the health of individual patients by using patient records to support medical research. EMR systems contain an enormous amount of raw data, which can drive innovation in public health and biomedical research. This research will do good not only for the health of individual patients, but also for the health of society (Mercuri, 2010). As a result, as new EMR systems are designed, patients should be given the ability to release information from their EMRs to researchers and scientists.

Nonmaleficence
Not only does beneficence affect ethics concerning EMR systems, but so does nonmaleficence. According to Taber's Cyclopedic Medical Dictionary, nonmaleficence is "the principle of not doing something that causes harm. Hippocrates felt this was the underpinning of all medical practice. He advised his students, primum non nocere ('first, do no harm')" ("Nonmaleficence," 2010). Nonmaleficence affects ethics concerning EMR systems because it is the employee's responsibility to report any negligence or fraud involving patient medical records. If an employee does not report negligence or fraud, it will cause harm to the organization and to the patient. Reporting negligence will make the organization aware of the problem and help it find a solution. Employees can help prevent negligence or fraud by notifying management when a problem is discovered. They can also help by making sure that their system access information is secure and that they are creating accurate records. If employees follow these EMR security policies, they will ensure that patient medical records are secure and safe from harm.

Justice
Not only does nonmaleficence affect ethics concerning EMR systems, but so does justice. According to Mercuri (2010), "justice is commonly defined as fairness. With respect to health care, justice refers to society's duty to provide its members with access to an adequate level of health care that fulfills basic needs" (para 5). Justice affects ethics concerning EMR systems because EMRs are most helpful when the system is easy to use, fully integrated, and easily searchable. EMR systems have the potential to assist health care organizations in providing higher quality care to users and patients. In addition, EMR systems assist health care organizations in operating more equitably through improved effectiveness.

Conclusion
Even though there are still some ethical issues with EMR systems, health care professionals are moving in the right direction by being more aware. Health care professionals want to do the right thing by following their organization's code of ethics, but they are not always clear on how to handle certain EMR situations properly. To handle such situations, they can use the ACHE as a reference, which ensures that they are meeting ACHE standards. Health care professionals can also apply the four principles of ethics to determine a resolution.
Applying the four principles of ethics ensures that they are following the proper protocols and guidelines while leaving considerable room for judgment in certain cases.

References
Carel, D. (2010, October). The ethics of electronic health records. Yale Journal of Medicine & Law, VII(1), 8-9.
Kennedy, W. (2004). Beneficence and autonomy in nursing: a moral dilemma. British Journal of Perioperative Nursing, 14(11), 500-506. Retrieved from EBSCOhost.
Mercuri, J. (2010). The ethics of electronic health records. Retrieved from http://www.clinicalcorrelations.org/?p=2211
Nonmaleficence. (2010). Taber's Cyclopedic Medical Dictionary, 21st ed. Retrieved from EBSCOhost.
Pickerton, K. (2005). History of electronic medical records. Retrieved from http://ezinearticles.com/?History-Of-Electronic-Medical-Records&id=254240

Friday, January 10, 2020

RISC & Pipelining

What is RISC Architecture?
* RISC stands for Reduced Instruction Set Computer.
* An instruction set is a set of instructions that helps the user construct machine language programs to do computable tasks.

History
* In the early days, mainframes consumed a lot of resources for their operations.
* Due to this, in 1980 David Patterson of the University of California, Berkeley introduced the RISC concept.
* This included fewer instructions with simple constructs, which gave faster execution and less memory usage by the CPU.
* Approximately a year was taken to design and fabricate RISC I in silicon.
* In 1983, Berkeley RISC II was produced. It is with RISC II that the RISC idea was opened to the industry.
* In later years it was incorporated into Intel processors.
* After some years, a revolution took place between the two instruction sets, whereby RISC started incorporating more complex instructions and CISC started to reduce the complexity of their instructions.
* By the mid-1990s some RISC processors became more complex than CISC!
* Today the difference between RISC and CISC is blurred.

Characteristics and Comparisons
As mentioned, the difference between RISC and CISC is getting eradicated, but these were the initial differences between the two:

RISC | CISC
Fewer instructions | More instructions (100-250)
More registers, hence more on-chip memory (faster) | Fewer registers
Operations done within the registers of the CPU | Operations can be done external to the CPU, e.g. in memory
Fixed-length instruction format, hence easily decoded | Variable-length instruction format
Instruction execution in one clock cycle, hence simpler instructions | Execution in multiple clock cycles
Hard-wired, hence faster | Microprogrammed
Fewer addressing modes | A variety of addressing modes

Addressing modes: register direct, immediate addressing, absolute addressing. (Give examples of one set of instructions for a particular operation, and of instruction formats.)
http://www-cs-faculty.stanford.edu/~eroberts/courses/soco/projects/2000-01/risc/risccisc/

Advantages and Disadvantages
* Speed of instruction execution is improved.
* Quicker time to market for the processors, since fewer instructions take less time to design and fabricate.
* Smaller chip size, because fewer transistors are needed.
* Consumes less power and hence dissipates less heat.
* Less expensive, because of fewer transistors.
* Because of the fixed length of the instructions, memory is not used efficiently.
* For complex operations, the number of instructions will be larger.

Pipelining
The origin of pipelining is thought to be in the early 1940s. The processor has specialised units for executing each stage in the instruction cycle, and the instructions are performed concurrently, like an assembly line.

Time steps (clocks) run left to right:
Instruction 1: IF | ID | OF | OE | OS |    |    |    |
Instruction 2:    | IF | ID | OF | OE | OS |    |    |
Instruction 3:    |    | IF | ID | OF | OE | OS |    |
Instruction 4:    |    |    | IF | ID | OF | OE | OS |

Pipelining is used to accelerate the speed of the processor by overlapping various stages in the instruction cycle. It improves the instruction execution bandwidth. Each instruction takes 5 clock cycles to complete. When pipelining is used, the first instruction still takes 5 clock cycles, but each following instruction finishes 1 clock cycle after the previous one.
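To quantify that speed-up (a standard textbook result, added here for clarity rather than taken from the original notes): a pipeline with k stages executing n instructions with no stalls needs

$\text{cycles} = k + (n - 1), \qquad \text{speedup} = \frac{n \cdot k}{k + (n - 1)}$

and the speedup approaches k as n grows. For the diagram above (k = 5, n = 4), that is 5 + 3 = 8 cycles instead of the 4 × 5 = 20 a non-pipelined processor would need.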
Types of Pipelining
There are various types of pipelining. These include the arithmetic pipeline, the instruction pipeline, superpipelining, superscaling and vector processing.

Arithmetic pipeline: Used to deal with scientific problems like floating point operations and fixed point multiplications. There are different segments or sub-operations for these operations, which can be performed concurrently, leading to faster execution.

Instruction pipeline: This is the general form of pipelining, which has been explained above.

Pipeline Hazards
Data dependency: when two or more instructions attempt to share the same data resource, i.e. an instruction tries to access or edit data which is being modified by another instruction. There are three types of data dependency:
RAW: Read After Write – instruction j reads before instruction i writes the data. This means that the value read is too old.
WAR: Write After Read – instruction j writes before instruction i reads the data. This means that the value read is too new.
WAW: Write After Write – instruction j writes before instruction i writes the data. This means that a wrong value is stored.

Solutions to data dependency:
* Stall the pipeline – a data dependency is predicted and the consequent instructions are not allowed to enter the pipeline. Special hardware is needed to predict the data dependency, and a time delay is caused.
* Flush the pipeline – when a data dependency occurs, all other instructions are removed from the pipeline. This also causes a time delay.
* Delayed load – insertion of no-operation (NOP) instructions between data-dependent instructions. This is done by the compiler and avoids the data dependency, as the two tables below show (see also the code sketch after them).

Without delayed load:
Clock Cycle    | 1  | 2  | 3  | 4  | 5  | 6
1. Load R1     | IF | OE | OS |    |    |
2. Load R2     |    | IF | OE | OS |    |
3. Add R1 + R2 |    |    | IF | OE | OS |
4. Store R3    |    |    |    | IF | OE | OS

With a NOP inserted:
Clock Cycle    | 1  | 2  | 3  | 4  | 5  | 6  | 7
1. Load R1     | IF | OE | OS |    |    |    |
2. Load R2     |    | IF | OE | OS |    |    |
3. NOP         |    |    | IF | OE | OS |    |
4. Add R1 + R2 |    |    |    | IF | OE | OS |
5. Store R3    |    |    |    |    | IF | OE | OS
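To make the delayed-load idea concrete, here is a minimal sketch in C (an illustration written for this post, not part of the original notes; the register encoding and the one-instruction hazard window are simplifying assumptions) of how a compiler pass might detect a load-use RAW hazard between adjacent instructions and emit a NOP:

#include <stdio.h>

typedef struct {
    const char *text;   /* assembly text, for printing */
    int is_load;        /* 1 if the instruction loads from memory */
    int dest;           /* register written (0 = none) */
    int src1, src2;     /* registers read (0 = none) */
} Insn;

/* Load-use (RAW) hazard: the next instruction reads a register that
   the immediately preceding load has not yet written back. */
static int load_use_hazard(const Insn *prev, const Insn *next) {
    return prev->is_load && prev->dest != 0 &&
           (prev->dest == next->src1 || prev->dest == next->src2);
}

int main(void) {
    /* The example from the tables above: the add of R1 and R2
       immediately follows the load of R2. */
    Insn prog[] = {
        {"LOAD  R1",         1, 1, 0, 0},
        {"LOAD  R2",         1, 2, 0, 0},
        {"ADD   R3, R1, R2", 0, 3, 1, 2},
        {"STORE R3",         0, 0, 3, 0},
    };
    int n = (int)(sizeof prog / sizeof prog[0]);

    for (int i = 0; i < n; i++) {
        if (i > 0 && load_use_hazard(&prog[i - 1], &prog[i]))
            printf("NOP                ; inserted (delayed load)\n");
        printf("%s\n", prog[i].text);
    }
    return 0;
}

Running it prints the program with a single NOP between LOAD R2 and ADD, matching the second table.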
Branch dependency: this happens when one instruction in the pipeline branches to another instruction. Since the following instructions have already entered the pipeline, a branch incurs a branch penalty.

Solutions to branch dependency:
1. Branch prediction: the outcome of a branch instruction is predicted and instructions are pipelined accordingly.
2. Branch target buffer.
3. Delayed branch: the compiler predicts branch dependencies and rearranges the code in such a way that the branch dependency is avoided. No-operation instructions can also be used.

Example instruction sequence:
1. LOAD MEM[100] R1
2. INCREMENT R2
3. ADD R3 R3 + R4
4. SUB R6 R6 - R5
5. BRA X

Without handling the branch:
Clock Cycle          | 1  | 2  | 3  | 4  | 5  | 6  | 7  | 8  | 9
1. Load              | IF | OE | OS |    |    |    |    |    |
2. Increment         |    | IF | OE | OS |    |    |    |    |
3. Add               |    |    | IF | OE | OS |    |    |    |
4. Subtract          |    |    |    | IF | OE | OS |    |    |
5. Branch to X       |    |    |    |    | IF | OE | OS |    |
6. Next instructions |    |    |    |    |    |    | IF | OE | OS

Adding NOP instructions:
Clock Cycle          | 1  | 2  | 3  | 4  | 5  | 6  | 7  | 8  | 9
1. Load              | IF | OE | OS |    |    |    |    |    |
2. Increment         |    | IF | OE | OS |    |    |    |    |
3. Add               |    |    | IF | OE | OS |    |    |    |
4. Subtract          |    |    |    | IF | OE | OS |    |    |
5. Branch to X       |    |    |    |    | IF | OE | OS |    |
6. NOP               |    |    |    |    |    | IF | OE | OS |
7. Instructions in X |    |    |    |    |    |    | IF | OE | OS

Rearranging the instructions (delayed branch):
Clock Cycle          | 1  | 2  | 3  | 4  | 5  | 6  | 7  | 8
1. Load              | IF | OE | OS |    |    |    |    |
2. Increment         |    | IF | OE | OS |    |    |    |
3. Branch to X       |    |    | IF | OE | OS |    |    |
4. Add               |    |    |    | IF | OE | OS |    |
5. Subtract          |    |    |    |    | IF | OE | OS |
6. Instructions in X |    |    |    |    |    | IF | OE | OS

Intel Pentium 4 processors have 20-stage pipelines. Today, most of these circuits can be found embedded inside most microprocessors.

Superscaling: a form of parallelism combined with pipelining. A redundant execution unit provides for the parallelism. (Superscalar: 1984, Star Technologies – Roger Chen.) Two instructions enter the pipeline together in each clock cycle:

Pipe 1: IF | ID | OF | OE | OS |    |    |    |
Pipe 2: IF | ID | OF | OE | OS |    |    |    |
Pipe 1:    | IF | ID | OF | OE | OS |    |    |
Pipe 2:    | IF | ID | OF | OE | OS |    |    |
Pipe 1:    |    | IF | ID | OF | OE | OS |    |
Pipe 2:    |    | IF | ID | OF | OE | OS |    |
Pipe 1:    |    |    | IF | ID | OF | OE | OS |
Pipe 2:    |    |    | IF | ID | OF | OE | OS |

Superpipelining: the implementation of longer pipelines, that is, pipelines with more stages. It is mainly useful when some stages in the pipeline take longer than others. The longest stage determines the clock cycle, so if these long stages can be broken down into smaller stages, the clock cycle time can be reduced. This reduces wasted time, which is significant when a large number of instructions are performed. Superpipelining is simple because it does not need any additional hardware, unlike superscaling. There are, however, more side effects for superpipelining, since the number of stages in the pipeline is increased: a longer delay is caused when there is a data or branch dependency.

Vector Processing (vector processors: 1970s): Vector processors pipeline the data as well, not just the instructions. For example, if many numbers need to be added together, such as 10 pairs of numbers, a normal processor adds each pair at a time, so the same sequence of instruction fetching and decoding has to be carried out 10 times. In vector processing, since the data is also pipelined, the instruction fetch and decode occur only once and the 10 pairs of numbers (operands) are fetched all together. Thus the time to process the instructions is reduced significantly.

C(1:10) = A(1:10) + B(1:10)

Vector processors are mainly used in specialised applications like long-range weather forecasting, artificial intelligence systems, image processing, etc. When the performance limitations of the rather conventional CISC-style architectures of the period were analysed, it was discovered very quickly that operations on vectors and matrices were among the most demanding CPU-bound numerical computational problems faced.
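To see what the vector statement buys, here is a small C program (an illustration for this post, not from the original notes) with the scalar equivalent of C(1:10) = A(1:10) + B(1:10); on a scalar machine the loop body is fetched and decoded ten times, whereas a vector machine would issue a single vector-add instruction:

#include <stdio.h>

#define N 10

int main(void) {
    double a[N], b[N], c[N];
    for (int i = 0; i < N; i++) {
        a[i] = i + 1;
        b[i] = N - i;
    }

    /* Scalar equivalent of C(1:10) = A(1:10) + B(1:10).
       Each iteration repeats instruction fetch and decode; a vector
       processor fetches/decodes one vector-add instruction and
       streams all ten operand pairs through its arithmetic pipeline. */
    for (int i = 0; i < N; i++)
        c[i] = a[i] + b[i];

    for (int i = 0; i < N; i++)
        printf("%.0f ", c[i]);
    printf("\n");
    return 0;
}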
RISC Pipelining: RISC has simple instructions, and this simplicity is utilised to reduce the number of stages in the instruction pipeline. For example, the instruction decode stage is not necessary, because the encoding in RISC architecture is simple. Operands are all stored in the registers, hence there is no need to fetch them from memory, which reduces the number of stages further. Therefore, for pipelining with RISC architecture, the stages in the pipeline are instruction fetch, operand execute and operand store. Because the instructions are of fixed length, each stage in the RISC pipeline can be executed in one clock cycle.

Questions
1. Is vector processing a type of pipelining?
2. How do RISC and pipelining relate?

The simplest way to examine the advantages and disadvantages of RISC architecture is by contrasting it with its predecessor: CISC (Complex Instruction Set Computer) architecture.

Multiplying Two Numbers in Memory
Consider the storage scheme for a generic computer. The main memory is divided into locations numbered from (row) 1: (column) 1 to (row) 6: (column) 4. The execution unit is responsible for carrying out all computations. However, the execution unit can only operate on data that has been loaded into one of the six registers (A, B, C, D, E, or F). Let's say we want to find the product of two numbers – one stored in location 2:3 and another stored in location 5:2 – and then store the product back in the location 2:3.

The CISC Approach
The primary goal of CISC architecture is to complete a task in as few lines of assembly as possible. This is achieved by building processor hardware that is capable of understanding and executing a series of operations. For this particular task, a CISC processor would come prepared with a specific instruction (we'll call it "MULT"). When executed, this instruction loads the two values into separate registers, multiplies the operands in the execution unit, and then stores the product in the appropriate register. Thus, the entire task of multiplying two numbers can be completed with one instruction:

MULT 2:3, 5:2

MULT is what is known as a "complex instruction." It operates directly on the computer's memory banks and does not require the programmer to explicitly call any loading or storing functions. It closely resembles a command in a higher level language. For instance, if we let "a" represent the value of 2:3 and "b" represent the value of 5:2, then this command is identical to the C statement "a = a * b." One of the primary advantages of this system is that the compiler has to do very little work to translate a high-level language statement into assembly. Because the length of the code is relatively short, very little RAM is required to store instructions. The emphasis is put on building complex instructions directly into the hardware.

The RISC Approach
RISC processors only use simple instructions that can be executed within one clock cycle. Thus, the "MULT" command described above could be divided into three separate commands: "LOAD," which moves data from the memory bank to a register, "PROD," which finds the product of two operands located within the registers, and "STORE," which moves data from a register to the memory banks. In order to perform the exact series of steps described in the CISC approach, a programmer would need to code four lines of assembly:

LOAD A, 2:3
LOAD B, 5:2
PROD A, B
STORE 2:3, A

At first, this may seem like a much less efficient way of completing the operation. Because there are more lines of code, more RAM is needed to store the assembly level instructions. The compiler must also perform more work to convert a high-level language statement into code of this form.

CISC | RISC
Emphasis on hardware | Emphasis on software
Includes multi-clock complex instructions | Single-clock, reduced instructions only
Memory-to-memory: "LOAD" and "STORE" incorporated in instructions | Register-to-register: "LOAD" and "STORE" are independent instructions
Small code sizes, high cycles per second | Low cycles per second, large code sizes
Transistors used for storing complex instructions | Spends more transistors on memory registers

However, the RISC strategy also brings some very important advantages. Because each instruction requires only one clock cycle to execute, the entire program will execute in approximately the same amount of time as the multi-cycle "MULT" command. These RISC "reduced instructions" require fewer transistors of hardware space than the complex instructions, leaving more room for general purpose registers. Because all of the instructions execute in a uniform amount of time (i.e. one clock), pipelining is possible. Separating the "LOAD" and "STORE" instructions actually reduces the amount of work that the computer must perform. After a CISC-style "MULT" command is executed, the processor automatically erases the registers. If one of the operands needs to be used for another computation, the processor must re-load the data from the memory bank into a register. In RISC, the operand will remain in the register until another value is loaded in its place.

The Performance Equation
The following equation is commonly used for expressing a computer's performance ability:
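(The equation itself appears to have been dropped from this copy, probably an image in the original. The standard form, consistent with the discussion that follows, is:)

$\frac{\text{time}}{\text{program}} = \frac{\text{instructions}}{\text{program}} \times \frac{\text{cycles}}{\text{instruction}} \times \frac{\text{time}}{\text{cycle}}$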
The CISC approach attempts to minimize the number of instructions per program, sacrificing the number of cycles per instruction. RISC does the opposite, reducing the cycles per instruction at the cost of the number of instructions per program.

RISC Roadblocks
Despite the advantages of RISC-based processing, RISC chips took over a decade to gain a foothold in the commercial world. This was largely due to a lack of software support. Although Apple's Power Macintosh line featured RISC-based chips and Windows NT was RISC compatible, Windows 3.1 and Windows 95 were designed with CISC processors in mind. Many companies were unwilling to take a chance with the emerging RISC technology. Without commercial interest, processor developers were unable to manufacture RISC chips in large enough volumes to make their price competitive. Another major setback was the presence of Intel. Although their CISC chips were becoming increasingly unwieldy and difficult to develop, Intel had the resources to plow through development and produce powerful processors. Although RISC chips might surpass Intel's efforts in specific areas, the differences were not great enough to persuade buyers to change technologies.

The Overall RISC Advantage
Today, the Intel x86 is arguably the only chip which retains CISC architecture. This is primarily due to advancements in other areas of computer technology. The price of RAM has decreased dramatically. In 1977, 1 MB of DRAM cost about $5,000. By 1994, the same amount of memory cost only $6 (when adjusted for inflation). Compiler technology has also become more sophisticated, so that the RISC use of RAM and emphasis on software has become ideal.

Thursday, January 2, 2020

Architecture: Classical Greek vs. Medieval Gothic

Architecture: Classical Greek vs. Medieval Gothic
Wendy DeLisio
HUM_266
September 24, 2012
Taniya Hossain

Architecture: Classical Greek vs. Medieval Gothic

Looking at the design of different structures throughout the world, one may not realize the beauty of the art in each of them or the ideals on which they were constructed. Take for example the classical Greek era, 480 BCE – 330 BCE, which held the ideals of order, balance, and god-like perfection. This type of idealist architecture is seen in the Parthenon temple, built in 447-432 BCE (Ancient-Greece.org, 2012). The temple is built in tribute to the Goddess Athena, Goddess of war and wisdom. It is a…

Classic Greek architecture is made of stone resting on stone, with nothing but pressure holding it together. This is best exemplified in Greek temples, such as the Parthenon. The Parthenon is a post and lintel structure, built of limestone and marble, which were the common building materials of that age (Sporre, 2010). Using these types of materials limited the architect's use of space. In order for the building to stand without the roof collapsing, many columns were needed to hold the roof up. These columns, known as Doric columns because of their style, were made of marble, and the pressure of the stone roof resting on them held them together. The Parthenon was adorned with many beautiful statues, from the metopes, a series of carved panels forming the Doric frieze telling stories of the history and battles of the gods, to the towering statue of the Goddess Athena for which it was built. The Parthenon and other Greek temples were meant to be revered from the outside as a centerpiece of the city, a monument to the gods of that age. Gothic architecture, unlike classic Greek, used stone masonry. By using stone masonry, builders were able to create arches and redistribute the pressure of the stones, enabling the structures to be built taller. They also created what is called a buttress and used this to hold up walls and arches.