Featured Post

Case Assignment: Disney the Happiest Brand on Earth

REPORT 1 CASE ASSIGNMENT: Disney, the Happiest Brand on Earth. In 2006, Disney's Pixar released the hit film Cars, which earned $462 ...

Monday, September 30, 2019

Enron Scandal with Code of Ethics

Arthur Andersen Limited Liability Partnership was one of the "Big Five" accounting firms, providing auditing, tax and consulting services to large corporations. It was a firm held in high regard and trusted by the public and investors, but in the end it faced bankruptcy. In the early 20th century, investors did not know who could be trusted because business scandals were widespread. What they needed were auditors. Arthur Andersen established a company to be trusted: he and his auditors would check and certify company accounts if those accounts had been prepared honestly and accurately. Andersen advised his partners to pay attention to public needs rather than to the profit the company made. After the 1950s, the firm was forced to commercialise with the coming of the Information Technology era, but it still kept its reputation. To win the best customers, however, it had to attract a new generation of employees. The new director was keen to win more customers; the firm audited and certified more accounts and made larger profits. Andersen ended up vouching for the accounts of dishonest companies, from John DeLorean to Enron and WorldCom. The standards of the code of ethics that Andersen violated are: 1. Standard I (A) Knowledge of the Law. Members and Candidates must understand and comply with all applicable laws, rules, and regulations (including the CFA Institute Code of Ethics and Standards of Professional Conduct) of any government, regulatory organization, licensing agency, or professional association governing their professional activities. In the event of conflict, Members and Candidates must comply with the more strict law, rule, or regulation. Members and Candidates must not knowingly participate or assist in, and must dissociate from, any violation of such laws, rules, or regulations. Consider the case of John DeLorean, the founder of the DeLorean Motor Company, who went to Northern Ireland to build his dream car. The British government, desperate to create jobs, gave him 80 million pounds. Andersen was responsible for auditing how the money was spent, but DeLorean spent money that did not belong to him. He used company funds earmarked for the purchase of equipment to decorate his houses. He ordered two Mercedes-Benz cars, one of which was sent to his wife to use in California. DeLorean also arranged for 17 million dollars to be paid into a Swiss bank account called GDP. In the end, DeLorean delivered nothing on the dream car. Although the Code and Standards do not require that members and candidates report violations to their governmental or regulatory organizations, they strongly encourage members to report violations. Andersen should have reported DeLorean, who spent money that was not related to the development of the dream car and that did not belong to him. The firm should not have participated in illegal activities, such as helping DeLorean cover up the facts, and should have followed the rules and regulations. 2. Standard I (B) Independence and Objectivity. Members and Candidates must use reasonable care and judgment to achieve and maintain independence and objectivity in their professional activities. Members and Candidates must not offer, solicit, or accept any gift, benefit, compensation, or consideration that reasonably could be expected to compromise their own or another's independence and objectivity. Consider the case of WorldCom, one of the companies that used false accounts to earn profits by deceiving the public. After the scandal, WorldCom became the largest bankruptcy in U.S. history. It led to a domino effect of accounting and similar corporate scandals that continue to tarnish American business practices and the foundations of the economy.
Hundreds of thousands of investors lost their life savings. Arthur Andersen himself emphasised professional judgement and practised independence and objectivity, refusing to certify dishonest accounts. The new generation of employees working at Andersen should have followed the thinking and views of the founder. Although they wanted to expand the business, they should have worked independently to report, or refuse to certify, dishonest accounts, not helped clients deceive the public. 3. Standard I (D) Misconduct. Members and Candidates must not engage in any professional conduct involving dishonesty, fraud, or deceit or commit any act that reflects adversely on their professional reputation, integrity, or competence. Arthur Andersen was involved in dishonest acts, namely producing false accounts. The firm lost its reputation once the scandal broke. In a nutshell, all companies should follow rules and regulations; otherwise, thousands or even millions of people may lose their life savings or end up in debt.

Sunday, September 29, 2019

Philips VS Matsushita Case Study

N.V. Philips (Netherlands) and Matsushita Electronic (Japan) had followed very different strategies and emerged with very different organizational capabilities. Philips built its success on a worldwide portfolio of responsive national organizations, while Matsushita based its global competitiveness on its centralized, highly efficient operations in Japan. During the 1990s, both companies faced major challenges to their competitive positions and organizational models, and at the end of the decade both were struggling to re-establish their competitiveness. At the start of the new millennium, new CEOs at both companies were implementing yet another round of strategic initiatives and organizational restructuring. Observers wondered how the changes would affect their long-running competitive battle. The name Philips has become more familiar, but the company we know as Panasonic nowadays is the brand name of Matsushita. Philips began as a producer of light-bulbs only and became the leader in industrial research. After dividing the company into Product Divisions and National Organizations, it innovated new products (e.g. colour TV, stereo TV, TVs with teletext). But throughout this period Philips continued its profitless progress, and over three decades seven chairmen experimented with reorganizing the company to deal with its growing problems. After the 1990s, Philips tried to overcome the problem of profitless progress by cutting costs through decentralizing its production to different parts of the world (e.g. the digital audio tape and electric-shaver product lines were relocated to Japan). But after a 30-year quest, Philips recognized that its attempt to build efficiency into its global operations had failed. On the other hand, Konosuke Matsushita, a 23-year-old inspector at the Osaka Electric Light Company, started his own business producing a double-ended socket. The Matsushita company grew rapidly and expanded into battery-powered lamps, electric irons and radios. On the 14th anniversary of Matsushita, KM announced to his 162 employees a 250-year corporate plan broken into 25-year sections, each to be carried out by successive generations. His plan was codified in the company creed and in the "Seven Spirits of Matsushita".
Creed: Through our industrial activities, we strive to foster progress, to promote the general welfare of society, and to devote ourselves to furthering the development of world culture.
Seven Spirits of Matsushita: Service through Industry; Fairness; Harmony and Cooperation; Struggle for Progress; Courtesy and Humility; Adjustment and Assimilation; Gratitude.
Key findings of this case:
Philips: Started its business with a single product focus. Organization development through the separation of National Organizations and Product Divisions. Seven changes of chairman within three decades, each a different attempt at reorganization. Production diversification and the shutting down of 75 production facilities for cost cutting in 1987. During the 1990s, Operation Centurion reduced headcount by around 22% of the company's employees. In 2001, Gerard's decision to outsource the products that could not add value.
Matsushita: Started business in 1918 as a double-ended socket producer. On the 14th anniversary of Matsushita, KM announced to his 162 employees a 250-year corporate plan broken into 25-year sections, each to be carried out by successive generations. Advanced with a flood of new products, around 5,000 electronic products.
Became the first Japanese company to adopt the divisional structure, giving each division clearly defined profit responsibility for its products. Had a clear and specific target for the future growth of the company, and each division had to pay 60% of its profits to the parent company. Built global leadership through VCRs in the 1980s. KM changed the approach to control: instead of controlling inputs, he started to monitor outputs. Wherever the location, there would be a manager from headquarters; that is how the relationship between headquarters and the subsidiaries was managed.
Suggestions: In our view, Philips should have set clear specifications for the national organizations and the product divisions. Each should have been given a target against which its performance could be judged. Instead of just minimizing and diversifying production, they should have recruited young blood, because young blood brings innovation into a business. Considering their excessive cost of production, they should have outsourced as much as possible to minimize cost and maximize the company's profits. Most importantly, all the divisions should have been monitored by headquarters, so that performance and cost effectiveness would have been given more emphasis.

Saturday, September 28, 2019

Earlier 19th century fashion styles and their cultural and historical Essay

Earlier 19th century fashion styles and their cultural and historical significance at the time - Essay Example As Lynch and Strauss point out, it becomes evident from history that the concept of beauty is not set by women but by mainstream society, and that mainstream society redefines it from time to time. In other words, the evolution of society and social thought is well expressed in fashion too (12). To begin with, in ancient times the most important factor considered when selecting a partner was health. While men had to engage in hunting in order to support their families, women had to be able to meet the demands of childbirth. That means that in ancient times, when survival was of utmost concern and the sick had little chance of survival, large muscles made a man beautiful, and wide hips and large breasts made a woman attractive. As Hyland states, until the socio-economic development of Greece during the fifth century B.C., there was no clear concept of beauty. However, as painting and sculpture developed, beauty was attributed to certain essential features (45). To illustrate, Plato considered beauty the result of symmetry and harmony that creates a golden proportion: the ideal face had to have a width that is two-thirds of its length. In addition to this attraction to symmetry, in Greek and Roman culture one can see an affinity for blond hair. However, one can see that during the Middle Ages women had to face a lot of hardship in the name of fashion. To illustrate, in Europe the period saw women as predators who posed a carnal challenge, and this situation was created mainly by religion. So women were restrained even from wearing jewelry, and this restraint came almost solely from clerics. As a result, married women had to conceal their hair in order to avoid arousing desire in others, though virgins were allowed to expose their hair. However, blond hair was something to be feared, as it directly meant an invitation to

Friday, September 27, 2019

CONSIDERING THE EXPERIENCE OF INTERPROFESSIONAL COLLABORATION IN YOUR Essay

CONSIDERING THE EXPERIENCE OF INTERPROFESSIONAL COLLABORATION IN YOUR AREA OF CLINICAL EXPERIENCE - Essay Example Each individual contributes from within the limits of her/his scope of practice" (Canadian Physiotherapy Association 2009). It refers to a situation in which a number of professionals work with one another to enhance cooperation and the quality of care (Pungo n.d.). The collaborative process has also been defined as a dynamic process which requires that professional boundaries be surpassed if each participant is to contribute to developments in patient care while appropriately bearing in mind the qualities and skills of the other professionals (Canadian Physiotherapy Association 2009). Inter-professional collaboration is a process for communication and decision-making that encourages the active involvement of each and every discipline in patient care and expands patient- and family-focused objectives and values. It allows for flexible and synchronized services and a capable and receptive workforce. Mutual understanding and group effort build up effective multidisciplinary teams. This permits professionals to work beyond the limitations of traditionally ascribed roles, and facilitates efficient role substitution. This provides healthcare professionals with the imperative support of a skilled workforce, for example nurse practitioners, pharmacists, etc. Similarly, inter-professional collaboration illustrates the interactions among individual professionals who might stand for a certain discipline or branch of knowledge, but who additionally bring their exceptional educational backgrounds, experiences, principles, responsibilities, and uniqueness to the process. It deals with the phenomena of mutual respect, maximum utilization of resources, understanding of individual responsibilities, and competence and skills within respective disciplines. It entails trust, communication, respect and fairness behind the professional relationship in which different healthcare professionals work together to offer the best possible care to their patient (Martin et al. 2010). The phenomenon of inter-professional collaboration to enhance health outcomes is not novel; it has been and continues to be the foundation of the healthcare system. Public health collaborations comprise not only the certified professionals but also systems of communities, government agencies, nonprofit organizations, and private sector groups to deal with multifaceted health outcomes (Zaccagnini & White 2010, p. 240). Previous research illustrates that collaboration entails common acknowledgement, consideration and respect for the complementary roles, skills, and abilities of the inter-professional team (Zaccagnini & White 2010, p. 238). Effective collaborative partnerships support quality and cost-effective care through a planned process that permits members to exchange important knowledge and ideas and later participate in a process of mutual decision making (Zaccagnini & White 2010, p. 238). The Institute of Medicine's (IOM) 2001 report focuses on inter-professional collaboration and stresses the need for caregivers and institutions to actively cooperate with each other, exchange information, and make provisions for care coordination, because the needs of any person or population are outside the expertise of any solitary health profession. Accrediting and regulatory agencies identify inter-professional collaboration as a necessary part of the avoidance of medical mistakes.
It aims at enhancing communication and teamwork among care givers, personnel, and patients as ways

Thursday, September 26, 2019

Project Management Essay Example | Topics and Well Written Essays - 500 words - 1

Project Management - Essay Example The trade show materials have to be shipped in time to arrive before the trade show, and the vendor/printer has to deliver in time so that they can be shipped in time. The timing for training the staff is vital, as Pat will be away on vacation; travel arrangements are essential to avoid last-minute complications, as shows of this magnitude attract participants from all over the world. Not all, but many activities are dependent on the previous activity, which adds importance to adhering to time schedules. Hence, time management would involve defining and sequencing the activities, estimating the duration and the resources, and developing the schedule (ITPM, n.d.). The second most important knowledge area recognized by the PMBOK is human resources management. Every knowledge area includes a planning area (ITPM). The people involved in the project have to be used as effectively as possible. The first task is to identify the staff and assign roles, as Pat and Terry have been identified, taking into account their strengths and weaknesses. The project roles, responsibilities and reporting relationships have to be assigned (Duncan, n.d.). Staff acquisition is essential, after which team development needs to be executed. For any project to be a success teamwork is essential, hence acquiring, developing and managing the project team is essential. The effort and expertise of different individuals are necessary to execute the project effectively. Hence, this knowledge area also involves delegating, motivating, coaching and mentoring. The third most important knowledge area relevant to the trade show project is communication. This involves the timely and effective generation, dissemination, storage and disposition of project information (Duncan). Since most activities are interdependent, communication at all levels gains importance. Everyone involved in the project must be prepared to receive and send communication in a language understood by all. Communication planning

Wednesday, September 25, 2019

Japan's Nationalism Essay Example | Topics and Well Written Essays - 1250 words

Japan's Nationalism - Essay Example Broadly speaking, Japanese citizens have developed skepticism towards the opinions they initially harbored regarding the political form of their nation and its present cultural nature. This skepticism is the core of my paper, which seeks to find the causes of Japanese distrust of and disloyalty towards their government, and the ways in which the Japanese citizenry can increase their happiness and loyalty to their government. The country is found wanting when compared to Denmark, which was ranked the leading country for happiness in the world. Japan has the second largest free market economy in the world. Its mainstay is rooted in trade at an international level, along with the less prominent economic areas of agriculture, service delivery supported by a proficient array of industrial technicians, investors, and industrial developers, and the distribution of commodities. Japan is minimally endowed with natural resources that can be traded for foreign exchange, but this imbalance is offset by the high volumes of trade conducted within and outside Japan. Though Japan has been ranked as the world's second largest free market economy, economic growth has been falling since the early 1990s, running at about 1% per annum, which was quite low when compared to the 4% per annum growth experienced in the 1980s. Though Japan experienced a period of recovery in the early years of the 2000s, economic growth has fallen, reflecting global economic trends. The nation plummeted into a recession in 2008, prompted by a global decline in demand for its products (Storry 1957, pp. 35-36). This stagnation in economic growth and the eventual recession did not pass unnoticed by Japanese citizens, as it had a direct impact on them. Increases in the global price of crude oil sent a significant ripple through the prices of household items. These items attracted higher prices in the market, and so the Japanese citizenry had to pay more for products than they did initially. This trend has left them feeling the pinch, and they are uncomfortable with the price increases. At the same time, demand for Japanese products in the international market has fallen, fetching less foreign exchange. This has also led to the loss of jobs for some labourers in Japan, causing a struggle to make household ends meet. The increased rate of joblessness has been another cause of discomfort for the Japanese people. This discomfort has resulted in the feeling of unhappiness that the Japanese are experiencing (Wilson 2002). Exports of Japanese products have also been affected by the slowing down of both the United States and Chinese economies. The Japanese market has for a long time relied on these two nations as markets for its products. The slow growth of the United States and Chinese economies is reflected in the Japanese market through decreased demand for Japanese products. This in turn has led to a decline in Japanese revenue, which is in turn reflected in the poor provision of national services by the Japanese government. This has led people to criticize the government for its failure to deal effectively with the situation. This means that the Japanese are not happy with the way their government is dealing with the

Tuesday, September 24, 2019

Working with children and families case study Essay

Working with children and families case study - Essay Example The different aspects of intervention in families are historical, social, psychological and legal in nature. These aspects are incorporated in the guidelines followed by social workers to resolve the case-specific problems of their clients (Hepworth et al., 2009). The historical aspect is one of the facets of implementing family intervention. This aspect is important for determining the possible causes of the situation in which intervention is required, as well as the background information for assessing the needs of the person. Included in the historical aspect are the family members in the household and their relationship to the person in need of assistance. Another factor included in this aspect is the experiences of the parents and the events that have happened in the family that may have triggered the problematic situation. The interactions and interrelationships between family members are also considered, as they may directly or indirectly affect the perspective and disposition of the person in focus. In addition, the strengths and weaknesses of the parents that can trigger changes in the behaviour and way of thinking of the person are also included in the fundamental historical components of family intervention (Horwath, 2000, pp. 56, 80-82). For the child Debbie, the members of her immediate family are in the household: her parents and her siblings. The historical aspect of the intervention includes the relationships between the family members and their effect on the child. Her mother Irene is agoraphobic and her brother is severely disabled. Other issues, such as her father John being unemployed, make matters worse. The conflicts within the household are having effects on Debbie and Hannah, who are exhibiting violent behaviour and problems with learning. The social aspect of the intervention in the family is another area of importance. This can be

Monday, September 23, 2019

The cost of capital Essay Example | Topics and Well Written Essays - 750 words

The cost of capital - Essay Example The formula determines the appropriate expected return of alternative projects. The cost of capital is the amount that the investor has to pay in order to generate a series of future dividend incomes and returns on investment (Sheridian, Martin & Keown, 2010). For example, Geoff Black (2010) describes a business that earns $1,000,000 in one year. The profits will grow by 2 percent per year, and the company generates a net worth of $16,666,667 after two years. The cost of capital is arrived at as follows: after computing the formula, X is equal to an 8 percent cost of capital. Further, the cost of capital can include the return that stock market investors expect to earn from their investments in a company. A firm that generates revenues greater than its cost of capital will entice the company's current and prospective investors to invest additional funds into the company's coffers. For example, Microsoft generated a 53 percent return on its equity. The company's equity is $7.2 billion, so in dollar terms the company's return on equity is $3.8 billion. If the company's cost of capital is 14 percent, the cost of capital charge is $1.0 billion. The company's residual income is therefore $2.8 billion ($3.8 billion - $1.0 billion). Another term for this $2.8 billion of income above the capital charge is residual income or Economic Value Added (EVA). Under Economic Value Added (EVA), management is faced with the financial question of whether the assets could be better used in other areas or by fresh management.
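A minimal sketch of the arithmetic these figures imply, assuming the familiar earnings-yield-plus-growth form of the cost of capital (the excerpt does not reproduce Black's actual formula, so this reconstruction is an assumption):

```latex
% Cost of capital in Black's example: earnings yield plus growth rate
k = \frac{1{,}000{,}000}{16{,}666{,}667} + 0.02 = 0.06 + 0.02 = 8\%

% Residual income (EVA) in the Microsoft illustration
\text{RI} = (\mathrm{ROE} - k)\times\text{Equity}
          = (0.53 - 0.14)\times \$7.2\text{B}
          \approx \$3.8\text{B} - \$1.0\text{B} = \$2.8\text{B}
```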

Sunday, September 22, 2019

Principles of finance Essay Example | Topics and Well Written Essays - 1500 words

Principles of finance - Essay Example In this case, the debtor is the companies in question. In most cases, debt is described as assets granted by the creditor to the debtor, and the debtor agrees to repay the debt with interest. Some companies use debt as part of their corporate finance strategy. Before debt is issued, both parties have to agree on the standard of deferred payment. In most cases, repayment is made in currency (Blum 2006); however, it can also be in the form of goods and services. Payment can be made in installments or in a single amount at the end of the loan agreement. A company offers different kinds of debt to customers to finance its operations. There are secured and unsecured debts, depending on whether the creditors have recourse to the assets of the borrower or not. In addition, there are private or public loans, depending on the parties involved. One of the main reasons why companies tend not to issue as much debt as possible is the fear of becoming bankrupt. If a company issues more debt than its stipulated capital allows, then the possibility of bankruptcy is usually high. This is especially the case with unsecured debts where the borrower defaults on payment. If this happens with a considerable number of borrowers, then the company can be at extreme risk (DePamphilis 2011). Therefore, these companies only offer debt within a given budget. The company's financial advisors advise top managers on the amount of debt that can be issued without altering the normal functioning of the company. Secondly, a company may simply not be in a position to offer as much debt as possible, because it may be going through harsh economic times. Its initial capital might therefore limit the amount of debt it can offer, and during such periods some companies may not offer any debt at all. The amount of debt a company offers is thus often guided by the company's economic situation, particularly the capital in place (Forsythyl 2009). In addition, the risks involved may deter a company from issuing as much debt as possible. Companies, with the help of their financial advisers, look into all the risks involved before issuing debt. These risks may result from economic downturns, variability in interest rates and changes in market conditions. Some companies are prepared to take these risks, but obviously at a minimum (Prattie 2011). Fewer companies are willing to take many risks, and they therefore tend to issue as limited an amount of debt as possible. Moreover, some of these companies put in place many terms and conditions that must be met before one gains access to these loans; therefore, some debtors back out of the lending process because of all these requirements. One of the requirements of a company before the issuance of debt is collateral, mostly in the form of assets. The debtor may not possess the required collateral and, therefore, may not be eligible to qualify for a debt from the company in question. In addition, the interest rates required by the company may be too high for the debtor, not forgetting the obligation to follow the covenant made in the process. Added to this is the fact that the debt has to be repaid. Therefore, the investor or debtor in question has to have a stable cash flow to be in a position to repay within the stipulated time (Black 2010). Therefore, the appetite for making investment decisions is reduced.
As a result, fewer debtors would be in a position to take the risk, because few of them have a stable cash flow. They may also fear the consequences that follow a defaulted debt payment, thereby reducing the amount

Saturday, September 21, 2019

Real Time Pcr Essay Example for Free

Real Time PCR Essay

TRADITIONAL PCR
The polymerase chain reaction (PCR) is one of the most powerful technologies in molecular biology. Using PCR, specific sequences within a DNA or cDNA template can be copied, or "amplified", many thousand- to a million-fold. In traditional (endpoint) PCR, detection and quantitation of the amplified sequence are performed at the end of the reaction, after the last PCR cycle, and involve post-PCR analysis such as gel electrophoresis and image analysis.

REAL-TIME QUANTITATIVE PCR (qPCR)
In real-time quantitative PCR (qPCR), the amount of PCR product is measured at each cycle. This ability to monitor the reaction during its exponential phase enables users to determine the initial amount of target with great precision.

WHAT'S WRONG WITH AGAROSE GELS?
* Poor precision.
* Low sensitivity.
* Short dynamic range (< 2 logs).
* Low resolution.
* Non-automated.
* Size-based discrimination only.
* Ethidium bromide staining is not very quantitative.

REAL TIME PCR VS PCR: BASIC PRINCIPLE
Quantitative PCR is carried out in a thermal cycler with the capacity to illuminate each sample with a beam of light of a specified wavelength and detect the fluorescence emitted by the excited fluorochrome. The thermal cycler is also able to rapidly heat and chill samples, thereby taking advantage of the physicochemical properties of the nucleic acids and DNA polymerase. The PCR process generally consists of a series of temperature changes that are repeated 25-40 times. These cycles normally consist of three stages: the first, at around 95 °C, allows the separation of the nucleic acid's double chain; the second, at a temperature of around 50-60 °C, allows the alignment of the primers with the DNA template; the third, at between 68 and 72 °C, facilitates the polymerization carried out by the DNA polymerase.
In real-time PCR:
* The amount of DNA is measured after each cycle by the use of fluorescent markers that are incorporated into the PCR product. The increase in fluorescent signal is directly proportional to the number of PCR product molecules (amplicons) generated in the exponential phase of the reaction.
* Fluorescent reporters used include double-stranded DNA (dsDNA)-binding dyes, or dye molecules attached to PCR primers or probes that are incorporated into the product during amplification.
* The change in fluorescence over the course of the reaction is measured by an instrument that combines thermal cycling with scanning capability. By plotting fluorescence against the cycle number, the real-time PCR instrument generates an amplification plot that represents the accumulation of product over the duration of the entire PCR reaction (Figure 1).
Figure 1: Amplification plots are created when the fluorescent signal from each sample is plotted against cycle number; therefore, amplification plots represent the accumulation of product over the duration of the real-time PCR experiment. The samples being amplified in this example are a dilution series of the template.

TYPES OF PCR
Quantitative PCR: A specific or non-specific detection chemistry allows the quantification of the amplified product. The amount detected at a certain point of the run is directly related to the initial amount of target in the sample. Common applications of quantitative PCR are gene expression analysis, pathogen detection/quantification and microRNA quantification. Quantitative PCR software uses the exponential phase of PCR for quantification.
Qualitative qPCR: In qualitative qPCR, the goal is to detect the presence or absence of a certain sequence, for example for virus sub-typing and bacterial species identification. It can also be used for allelic discrimination between wild type and mutant, between different SNPs or between different splicing forms. Different fluorophores can be used for the two alleles, and the ratio of the fluorophore signals correlates to the relative amount of one form compared to the other. Specific detection methods such as Double-Dye probe systems are more often used for these applications.

Overview of real-time PCR
Real-time PCR is a variation of the standard PCR technique used to quantify DNA or RNA in a sample. Using sequence-specific primers, the relative number of copies of a particular DNA or RNA sequence can be determined. Quantification of amplified product is obtained using fluorescent probes or fluorescent DNA-binding dyes and real-time PCR instruments that measure fluorescence while performing the temperature changes needed for the PCR cycles.

qPCR STEPS
There are three major steps that make up a qPCR reaction. Reactions are generally run for 40 cycles.
1. Denaturation - the temperature should be appropriate to the polymerase chosen (usually 95 °C). The denaturation time can be increased if the template GC content is high.
2. Annealing - use appropriate temperatures based on the calculated melting temperature (Tm) of the primers (5 °C below the Tm of the primer).
3. Extension - at 70-72 °C, the activity of the DNA polymerase is optimal, and primer extension occurs at rates of up to 100 bases per second. When an amplicon in qPCR is small, this step is often combined with the annealing step, using 60 °C as the temperature.

BASICS OF REAL TIME PCR
Baseline - the baseline phase contains all the amplification that is below the level of detection of the real-time instrument.
Threshold - the point where the threshold and the amplification plot intersect defines CT; it can be set manually or automatically.
CT - (cycle threshold) the cycle number at which the fluorescence passes the threshold.
ΔRn - (Rn minus baseline).
NTC - no template control.
Rn is plotted against cycle number to produce the amplification curves and gives the CT value.
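As a rough, illustrative sketch of how the CT value behaves (the threshold and copy numbers below are invented, not taken from the essay): product accumulates approximately as N = N0 x (1 + E)^n, so at 100% efficiency the CT shifts by about 3.3 cycles for every ten-fold change in starting template.

```python
# Minimal sketch of the relationship between starting copies, efficiency and CT.
# The threshold value is arbitrary; only the relative spacing of CT values matters.
import math

def ct(start_copies, efficiency=1.0, threshold=1e9):
    """Cycle at which N0 * (1 + E)**n crosses the threshold."""
    return math.log(threshold / start_copies, 1 + efficiency)

for n0 in (10, 100, 1_000, 10_000):
    print(f"{n0:>6} starting copies -> CT ~ {ct(n0):.1f}")
# Each ten-fold dilution shifts CT later by ~3.32 cycles at 100% efficiency,
# which is the behaviour expected for the dilution series described in Figure 1.
```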
ONE-STEP OR TWO-STEP REACTION
qRT-PCR can be one-step or two-step.
1. Two-step qRT-PCR
Two-step qRT-PCR starts with the reverse transcription of either total RNA or poly(A)+ RNA into cDNA using a reverse transcriptase (RT). This first-strand cDNA synthesis reaction can be primed using random hexamers, oligo(dT), or gene-specific primers (GSPs). To give an equal representation of all targets in real-time PCR applications and to avoid the 3' bias of oligo(dT), it is usually recommended that random hexamers or a mixture of oligo(dT) and random hexamers are used. The temperature used for cDNA synthesis depends on the RT enzyme chosen. Following the first-strand synthesis reaction, the cDNA is transferred to a separate tube for the qPCR reaction. In general, only 10% of the first-strand reaction is used for each qPCR.
2. One-step qRT-PCR
One-step qRT-PCR combines the first-strand cDNA synthesis reaction and the qPCR reaction in the same tube, simplifying reaction setup and reducing the possibility of contamination. Gene-specific primers (GSPs) are required. This is because using oligo(dT) or random primers will generate nonspecific products in the one-step procedure and reduce the amount of product of interest.

Overview of qPCR and qRT-PCR components
This section provides an overview of the major reaction components and parameters involved in real-time PCR experiments.
* DNA polymerase - One of the main factors affecting PCR specificity is the fact that Taq DNA polymerase has residual activity at low temperatures. Primers can anneal nonspecifically to DNA, allowing the polymerase to synthesize nonspecific product. The problem of nonspecific products resulting from mispriming can be minimized by using a "hot-start" enzyme. Using a hot-start enzyme ensures that no active Taq is present during reaction setup and the initial DNA denaturation step.
* Template - Anywhere from 10 to 1,000 copies of template nucleic acid should be used for each real-time PCR reaction. This is equivalent to approximately 100 pg to 1 µg of genomic DNA, or cDNA generated from 1 pg to 100 ng of total RNA. Excess template may increase the amount of contaminants and reduce efficiency. If the template is RNA, care should be taken to reduce the chance of genomic DNA contamination; one option is to treat the template with DNase I. Ultrapure, intact RNA is essential for full-length, high-quality cDNA synthesis and accurate mRNA quantification. RNA should be devoid of any RNase contamination, and aseptic conditions should be maintained.
* Reverse transcriptase - The reverse transcriptase (RT) is as critical to the success of qRT-PCR as the DNA polymerase. It is important to choose an RT that not only provides high yields of full-length cDNA but also has good activity at high temperatures. High-temperature performance is also very important for tackling RNA with secondary structure or when working with gene-specific primers (GSPs).
* dNTPs - It is recommended that both the dNTPs and the Taq DNA polymerase be purchased from the same vendor, as it is not uncommon to see shifts of one full threshold cycle (Ct) in experiments that use these items from separate vendors.
* Magnesium concentration - In qPCR, magnesium chloride or magnesium sulfate is typically used at a final concentration of 3 mM. This concentration works well for most targets; however, the optimal magnesium concentration may vary between 3 and 6 mM.
* UNG - Uracil-N-glycosylase is an enzyme that hydrolyses all single-stranded and double-stranded DNA containing dUTPs. Consequently, if all PCR amplifications are performed in the presence of a dNTP/dUTP blend, carrying out a UNG step before every run makes it possible to get rid of any previous PCR product.
* ROX - Some thermocyclers require a MasterMix containing ROX dye for normalization. This is the case for the ABI and Eppendorf machines, and it is optional on the Stratagene machines. If you work with such machines, it is easier to work with the ROX dye already incorporated in the MasterMix rather than adding it manually. It guarantees a higher level of reproducibility and homogeneity of your assays.
* Fluorescein - For iCycler iQ, MyiQ and iQ5 machines (Bio-Rad thermocyclers), the normalization method for SYBR Green assays uses fluorescein to create a "virtual background".
An ideal reverse transcriptase will exhibit the following attributes: * Thermostability— thermostable RTs function at the higher end of (or above) this range and allow for successful reverse transcription of GC-rich regions. * RNase H activity— RNase H activity can drastically reduce the yield and ratio of full-length cDNA, which translates to poor sensitivity. Several RTs, most notably SuperScript II and III, have been engineered for reduced RNase H activity. NORMALIZATION AND QUANTIFICATION: When analyzing and comparing results of Real-Time qPCR assays many researchers are confronted with several uncontrolled variables, which can lead to misinterpretation of the results. Those uncontrolled variables can be the amount of starting material, enzymatic efficiencies, and differences between tissues, individuals or experimental conditions. In order to make a good comparison, normalization can be used as a correction method, for these variables. The most commonly known and used ways of normalization are : * normalization to the original number of cells, * normalization to the total RNA mass, normalization to one or more housekeeping genes, * normalization to an internal or external calibrator. Normalization to number of cells can actually only be done for cell culture and blood samples. The two majors methods of normalization are the absolute quantification and the relative quantification . Absolute quantification Absolute quantification requires a standard curve of known copy numbers. The amplicon being studied can be cloned, or a synthetic oligonucleotide (RNA or DNA) can be used. The standard must be amplified using the same primers as the gene of interest and must amplify with the same efficiency. The standards must also be quantified accurately. This can be carried out by reading the absorbance at A260, although this does not distinguish between DNA and RNA, or by using a fluorescent ribonucleic acid stain such as RiboGreen. Relative quantification Relative quantification is the most widely used technique. Gene expression levels are calculated by the ratio between the amount of target gene and an endogenous reference gene, which is present in all samples. The reference gene has to be chosen so that its expression does not change under the experimental conditions or between different tissue. There are simple and more complex methods for relative quantification, depending on the PCR efficiency, and the number of reference genes used. STANDARD CURVE TO ASSESS EFFICIENCY, SENSITIVITY, AND REPRODUCIBILITY The final stage before assay employment is validating that all the experimental design parameters result in a highly efficient, sensitive, and reproducible experiment. * Reaction efficiency One hundred percent efficiency corresponds to a perfect doubling of template at every cycle, but the acceptable range is 90–110% for assay validation. This efficiency range corresponds to standard curve slopes of –3. 6 to –3. 1. The graph in Figure shows the measurement bias resulting solely from differences in reaction efficiency.. A standard curve is generated by plotting a dilution series of template against the Ct for each dilution. To some, sensitivity is measured by how early a target Ct appears in the amplification plot. However, the true gauge of sensitivity of an assay is whether a given low amount of template fits to the standard curve while maintaining a desirable efficiency. The most dilute sample that fits determines reaction sensitivity. 
The standard curve also includes an R2 value, which is a measure of replicate reproducibility. Standard curves may be repeated over time to assess whether the consistency, and therefore the data accuracy for the samples. Real-Time PCR Fluorescence Detection Systems Several different fluorescence detection technologies can be used for realtime PCR, and each has specific assay design requirements. All are based on the generation of a fluorescent signal that is proportional to the amount of PCR product formed. The three main fluorescence detection systems are: * DNA-binding agents (e. g. SYBR Green and SYBR GreenER technologies * Fluorescent primers (e. g. , LUX Fluorogenic Primers and Amplifluor qPCR primers) * Fluorescent probes (e. g. , TaqMan probes, Scorpions, Molecular Beacons) The detection method plays a critical role in the success of real-time PCR. DNA-Binding Dyes The most common system for detection of amplified DNA is the use of intercalating dyes that fluoresce when bound to dsDNA. SYBR Green I and SYBR GreenER technologies use this type of detection method. The fluorescence of DNA-binding dyes significantly increases when bound to double-stranded DNA (dsDNA). The intensity of the fluorescent signal depends on the amount of dsDNA that is present. As dsDNA accumulates, the dye generates a signal that is proportional to the DNA concentration and can be detected using real-time PCR instruments. SYBR Green I advantages †¢ Low cost assay †¢ Easy design and set up SYBR Green I disadvantages †¢ Non specific system †¢ Not adapted to multiplex †¢ Non suitable for qualitative qPCR Primer-Based Detection Systems Primer-based fluorescence detection technologies can provide highly sensitive and specific detection of DNA and RNA. In these systems, the fluorophores is attached to a target-specific PCR primer that increases in fluorescence when incorporated into the PCR product during amplification. * Amplifluor Real-Time PCR Primers Amplifluor real-time PCR primers are designed with both a fluorophore and quencher on the same primer. The primer adopts a hairpin configuration that brings the fluorophore in close proximity to the quencher. The fluorescent signal increases when the primer is unfolded and the fluorophore and quencher are de-coupled during incorporation into an amplification product. Figure: Ampliflour primer PROBE-BASED DETECTION SYSTEMS Probe-based systems provide highly sensitive and specifi c detection of DNA and RNA. However, dual-labeling and complex design specifi cations make them expensive and more diffi cult to use than primer-based systems or DNAbinding dyes. TaqMan probes = Double-Dye probes TaqMan probes, also called Double-Dye Oligonucleotides, Double-Dye Probes, or Dual Labelled probes, are the most widely used type of probes. A fluorophore is attached to the 5’ end of the probe and a quencher to the 3’ end. The fluorophores is excited by the machine and passes its energy, via FRET (Fluorescence Resonance Energy Transfer) to the quencher. TaqMan probes can be used for both quantification and mutation detection, and most designs appear to work well. TaqMan ASSAY DENATURATION ANNEALING OF PRIMERS AND PROBE POLYMERIZATION AND PROBE CLEAVAGE Molecular Beacons In addition to two sequence-specific primers, molecular beacon assays employ a sequence-specific, fluorescently labeled oligonucleotide probe called a molecular beacon, which is a dye-labeled oligonucleotide (25–40 nt) that forms a hairpin structure with a stem and a loop . 
A fluorescent reporter is attached to the 5 end of the molecular beacon and a quencher is attached to the 3 end. The loop is designed to hybridize specifically to a 15–30 nucleotide section of the target sequence Figure: Moleculer Beacon They are highly specific, can be used for multiplexing, and if the target sequence does not match the beacon sequence exactly, hybridization and fluorescence will not occur a desirable quality for allelic discrimination experiments. Hybridization probes (also called FRET probes) Roche has developed hybridization probes for use with their LightCycler. Two probes are designed to bind adjacent to one another on the amplicon. One has a 3’ label of FAM, whilst the other has a 5’ LC dye, LC red 640 or 705. When the probes are not bound to the target sequence, the fluorescent signal from the reporter dye is not detected. However, when the probes hybridize to the target sequence during the PCR annealing step, the close proximity of the two fluorophores allows energy transfer from the donor to the acceptor dye, resulting in a fluorescent signal that is detected. FRET probe principle and light cycler MELTING CURVE ANALYSIS Melting curve analysis can only be performed with real-time PCR detection technologies in which the fluorophore remains associated with the amplicon. Amplifications that have used SYBR Green I or SYBR GreenER dye primers can be subjected to melting curve analysis. Dual-labeled probe detection systems such as TaqMan probes are not compatible because they produce an irreversible change in signal by cleaving and releasing the fluorophore into solution during the PCR; however, the increased specificity of this method makes this less of a concern. The level of fluorescence of both SYBR Green I and SYBR GreenER dyes significantly increases upon binding to dsDNA. By monitoring the dsDNA as it melts, a decrease in fluorescence will be seen as soon as the DNA becomes single-stranded and the dye dissociates from the DNA. Figure: Melting curve analysis can detect the presence of nonspecifc products, as shown by the additional peaks to the left of the peak for the amplified product in the melt curve. How to perform melting curve analysis To perform melting curve analysis, the real-time PCR instrument can be programmed to include a melting profile immediately following the thermocycling protocol. After amplification is complete, the instrument will reheat your amplified products to give complete melting curve data. Most real-time PCR instrument platforms now incorporate this feature into their analysis packages. In general, the program steps will be: 1. Rapid heating of the amplified sample to 94 °C to denature the DNA. 2. Cooling the sample to 60 °C. 3. Slowly heating (by increasing the temperature 0. 2 °C/second) the sample while plotting fluorescence signal vs. temperature. (As the temperature increases and the dsDNA strands melt, the fluorescence signal will decrease. ) Figure: Example of a melting curve thermal profile setup on an Applied Biosystems instrument (rapid heating to 94 °C to denature the DNA, followed by cooling to 60 °C. ) Multiplex real-time PCR In multiplex real-time PCR, more than one set of gene-specific primers is used to amplify separate genes from the template DNA or RNA in a single tube. Typically, multiplex reactions are used to amplify a gene of interest and a â€Å"housekeeping† gene (e. g. , #-actin or GAPDH), which is used as a normalize for the reaction. 
Because more than one PCR product will be quantified in the same tube, different fluorescent reporter dyes are used to label the separate primers or probes for each gene. More Samples Analyzed per Plate. Target and normalizer in same reaction and Less sample consumed. APPLICATIONS OF REAL TIME PCR GENE EXPRESSION ANALYSIS A sample gene expression analysis using a multiplex TaqMan assay is presented in the following sections. In this example, we’re interested in the relative expression of three genes in the polyamine biosynthesis pathway, ornithine decarboxylase (ODC), ODC antizyme (OAZ), and antizyme inhibitor (AZI), in two different samples, sample A and sample B. 1. RNA was isolated from sample A and sample B. 2. RNA was reverse transcribed into cDNA. 3. The amount of the target genes (ODC, OAZ, and AZI) and the reference gene (b-actin) was determined in each of the cDNA samples using a multiplex qPCR assay. 4. Data were analyzed and the relative expression of each of the target genes in the two samples was calculated. EXAMPLE BRCA1 is a gene involved in tumor suppression. BRCA1 controls the expression of other genes. In order to monitor level of expression of BRCA1, real-time PCR is used. SNP GENOTYPING In order to perform SNP genotyping, two specific probes labeled with different dyes are used, the first for the wild type allele and the second for the mutant allele. If the assay results in the generation of only the first fluorescent color, then the individual is homozygous wild type at that locus. If the assay results in the generation of only the second fluorescent color, then the individual is homozygous mutant. And finally, if both fluorescent colors are produced, then the individual is heterozygous. At the end of the reaction, hydrolysis probes are digested. The quality of a hydrolysis probe is given by the hybridization efficiency, the quenching of the intact probe and the cleavage activity of Taq polymerase. HIV DETECTION Nowadays HIV is strikingly spreading out whole the world. so in order to diminish its distribution , it is necessary to detect it as soon as possible amp; for this purpose, Real time PCR is recommended by scientist. In this method ,’ pol’’ gen of the virus, is amplified in thermocycler. 6 patient have been studied. infection in these patients was confirmed by ELISA amp; western blot. * Sampling amp; RNA extracting from patients. * Cloning of target segment by using Xba I amp; Hind III. And 180 bp primers. * Standard virus mRNA was extracted. * Quantitative analysis of HIV virus by SYBR-green Real Time RT-PCR. CYSTIC FIBROSIS (CF) DETECTION: Cystic f ibrosis (CF) is the most common inherited disease among Caucasian populations with an incidence of ~1 in 2500 births. A3 base pair (bp) deletion, designated DF508, accounts for nearly 70% of CF cases and causes severe manifestations of the disease. It results in the absence of phenylalanine at position 508 of the cystic fibrosis transmembrane conductance regulator (CFTR) protein and this error prevents normal processing and translocation of the polypeptide chain to apical membranes of epithelial cells. This deletion can be detected by molecular beacons in real time PCR. Figure:Examples of specific molecular beacon fluorescence increase during real-time PCR in samples containing single lymphoblasts homozygous normal for CF (green), heterozygous DF508 (blue), or homozygous DF508 (red). A) Fluorescent signal from the molecular beacon detecting the normal allele. 
(B) Fluorescent signal from the molecular beacon detecting the DF508 allele. Dashed lines indicate the threshold of 200 units (~10 SD above baseline readings) used for determining CT values. THE ADVANTAGES OF REAL-TIME PCR * The ability to monitor the progress of the PCR reaction as it occurs in real time * The ability to precisely measure the amount of amplicon at each cy cle * An increased dynamic range of detection * The combination of amplification and detection in a single tube, which eliminates post-PCR manipulations. Rapid cycling times (1 hour) * High sample throughput (~200 samples/day) * Low contamination risk (sealed reactions) * Very sensitive (3pg or 1 genome eq of DNA) * Broad dynamic range (10 1010 copies) * Reproducible (CV lt; 2. 0 %) * Allows for quantitation of results * Software driven operation * No more expensive than â€Å"in house† PCR ($15/test) THE DISADVANTAGES * Current technology has limited capacity for multiplexing. Simultaneous detection of 2 targets is the limit. * Development of protocols needs high level of technical skill and/or support. Requires Ramp;D capacity and capital) * High capital equipment costs ($ 50,000 -160,000). REFRENCES * http://www. icmb. utexas. edu/core/DNA/qPCR/QiagenRT-PCR. pdf www. icmb. utexas. edu * http://books. google. com. pk/books? id=-v-U-mXWg-gCamp;printsec=frontcoveramp;dq=real+time+pcramp;hl=enamp;sa=Xamp;ei=Bph1UezKIceDhQeUh4CwCAamp;ved=0CDAQ6AEwAQ#v=onepageamp;q=real%20time%20pcramp;f=false books. google. com. pk * PCR/Real-Ti me PCR Protocols www. protocol-online. org Real-Time Pcr: An Essential Guide Google Books books. google. com * * http://www. gene-quantification. e/bio-rad-CFX96-bulletin-5589. pdf www. gene-quantification. de * https://www. google. com. pk/#output=searchamp;sclient=psy-abamp;q=fret+rt-qpcramp;oq=fret+in+rtamp;gs_l=serp. 1. 1. 0i22i30l2. 1583. 4622. 1. 10196. 6. 6. 0. 0. 0. 0. 551. 2584. 3-3j1j2. 6. 0 0. 0 1c. 1. 9. serp. 97Wjtm9UCU4amp;psj=1amp;bav=on. 2,or. r_cp. r_qf. amp;fp=f6d28cf5fd703914amp;biw=1366amp;bih=600 www. google. com. pk * BioTechniques Real-time PCR for mRNA quantitation www. biotechniques. com * http://env1. gist. ac. kr/joint_unugist/file/g_class11_real_time_pcr_vt. pdf env1. gist. ac. kr

Friday, September 20, 2019

Modelling and Simulation Using OPNET Modeller 14.5

4.1 Overview
The aim of this chapter is to illustrate the modelling and simulation, using OPNET Modeller 14.5 (Education version), of the autonomic wireless network management. In addition, it explains what modifications and assumptions were necessary in order to achieve the autonomic self-healing mechanism, including the agents' architecture and description.

4.2 Autonomic Management Agents
This section illustrates the modelling and simulation, using OPNET Modeller 14.5 (Education version), of a community of autonomic management agents that provide network fault analysis for a group of base stations. The main objective of these intelligent agents is to gather and process information in order to detect failures when base stations exchange information between them, and to create a highly available wireless access network. Analysing network failures is relatively difficult since these problems may differ from one network system to another and could depend on network dynamics, i.e., the type of network information to be exchanged and the traffic characteristics associated with that information. In addition, the pattern of failures could vary quickly as the network operates and reconfigures around a failed device.

As OPNET Modeller 14.5 (Education version) does not have an autonomous process ready for simulation use, the existing code had to be adapted to allow autonomic behaviour. Two different autonomic agents were required in order to provide self-healing network diagnosis and facilities. In this report, the OPNET coding modifications are called Agents, and two different types are described and applied to the access points. Testing Agents supply simplified data and monitoring capabilities to Node Agents, whereas Node Agents periodically check the information that Testing Agents gather and use it as a means of failure detection within the wireless access network. In addition, a Testing Agent is able to supervise and provide data regarding information exchanged among access points, and Node Agents use the data obtained by the Testing Agents as a method of node analysis. Several Testing Agents may be found on a single wireless client. A Testing Agent can be situated on a host device since it does not have to deal with data acquisition and simplification. In contrast, Node Agents are located on base stations.

4.2.1 Additions and Model Modifications
OPNET Modeller was used in order to determine the feasibility of the proposed model. The concept of Autonomic Mobile Wireless Networks is illustrated by using a community of wireless base stations which allow autonomous healing of interrupted paths. The OPNET simulation shown in this report contains two Node Agents and two Testing Agents which play the part of a group of autonomic base stations. The new OPNET topology required the creation of ten nodes in order to characterize every autonomic agent, and all the modifications were made to meet the needs of both agents. The autonomic behaviour was obtained through modifications to the wlan_server_adv and ip_arp_v4 OPNET process models, where code changes were made in order to achieve the desired behaviour.

Figure 4.0: OPNET ip_arp_v4 process model.
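As a rough illustration of how the two agent roles described above might be represented inside a modified process model, the following C sketch defines a minimal per-agent state record and a helper that notes when a monitored neighbour was last heard from. The structure, field names and constants are assumptions made for illustration; they do not reproduce the actual OPNET model code.

#include <string.h>

/* Hypothetical agent roles added to the modified process models. */
typedef enum { AGENT_TESTING, AGENT_NODE } AgentRole;

/* Minimal per-agent state: which role the node plays, which neighbouring
 * base stations it watches, and when each neighbour was last heard from. */
#define MAX_NEIGHBOURS 8

typedef struct
{
    AgentRole role;                            /* Testing Agent or Node Agent     */
    int       neighbour_count;                 /* neighbours currently monitored  */
    int       neighbour_id[MAX_NEIGHBOURS];    /* base-station identifiers        */
    double    last_heard[MAX_NEIGHBOURS];      /* simulation time of last message */
} AgentState;

/* Record that a message from a given neighbour arrived at time 'now'. */
static void agent_note_activity(AgentState *agent, int neighbour, double now)
{
    int i;
    for (i = 0; i < agent->neighbour_count; i++)
    {
        if (agent->neighbour_id[i] == neighbour)
        {
            agent->last_heard[i] = now;
            return;
        }
    }
}

int main(void)
{
    AgentState na;
    memset(&na, 0, sizeof na);
    na.role = AGENT_NODE;
    na.neighbour_count = 2;
    na.neighbour_id[0] = 3;
    na.neighbour_id[1] = 7;
    agent_note_activity(&na, 7, 12.5);   /* neighbour 7 heard at t = 12.5 s */
    return 0;
}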
4.2.2 Testing Agent (TA) and Node Agent (NA) Description
Each Testing Agent is paired with a Node Agent as a single component of a particular node in the OPNET simulation. As mentioned in section 3.3, each base station is aware of its neighbouring stations at all times. A Testing Agent (TA_1) is designed to watch for and detect alterations regarding other base stations. In the event of any modification of the network, TA_1 notifies Node Agent NA_1 using a UDP message. UDP offers no delivery guarantees, so TA_1 cannot assure successful message transmission; however, this lack of reliability is useful for simulation purposes.

After receiving information from TA_1, Node Agent NA_1 informs other stations about changes in its zone, and file updating may take place. When NA_1 observes that information it has sent has not arrived at its destination within a particular period of time, the agent alerts its neighbours that a probable node malfunction has occurred. This time depends on certain attributes fixed for a particular mobile user. Scalability of the network is achieved with the use of a second pair of agents: agent TA_2 has the job of monitoring path request messages sent and received by other stations. Information regarding each path request is captured by TA_2, including the time when the path request was generated and the destination of the request.

Changes to the mobility architecture were necessary, including ARP and IP alterations. The idea was to alter some settings in order to evaluate and compare the destination address with the address of the device to which specific information was sent. The destination address must belong to a registered wireless client, and the intelligent agents check that it is transmitted correctly. IP alterations were made by changing the moip_core process to allow stations to forward information packets to their neighbours, modify the IP routing mode, and help each station choose the best available route. The moip_core maintains a list that can be dynamically updated as the base stations travel between networks.

UDP is used as the transport protocol, and the management, mobility and registration information is handled by the process shown in the figure below.

Figure 4.1: OPNET moip_reg process model

The moip_reg process allows base stations to manage and update mobility information regarding neighbouring stations. When information is exchanged among stations, all the agents monitor and process each request and aim to find failures during the registration process. If the registration communication was successful, an identification value is compared with a mobility list, and a correct match between them means no error occurred during the registration procedure. Update messages must be sent when agents have no information regarding a mobile station due to updating failures. In fact, agents need acknowledgments in order to be sure that the communication between stations is working correctly. If an agent does not receive the update message, it will not be able to monitor base stations and all the information exchanged among agents will be lost. Therefore, all the updates and acknowledgments are verified against an identification field contained in moip_reg; if they are equal, the update is marked as confirmed and the exchange of information is considered free of failures.
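As a rough sketch of the identification check described above, assuming the registration update and its acknowledgment each carry an identification value, the following C fragment shows how such a comparison might be expressed. The message structures and the function are illustrative assumptions, not the actual moip_reg code.

#include <stdio.h>

/* Hypothetical registration update carrying an identification value,
 * loosely modelled on the check performed inside moip_reg. */
typedef struct
{
    int station_id;      /* base station that sent the update           */
    int update_ident;    /* identification value carried by the update  */
} RegUpdate;

typedef struct
{
    int station_id;      /* base station acknowledging the update       */
    int ack_ident;       /* identification value echoed back in the ack */
} RegAck;

/* An update is confirmed only when the acknowledgment echoes the same
 * identification value; otherwise the exchange is treated as a failure. */
static int registration_confirmed(const RegUpdate *upd, const RegAck *ack)
{
    return upd->station_id == ack->station_id &&
           upd->update_ident == ack->ack_ident;
}

int main(void)
{
    RegUpdate upd = { 3, 1042 };
    RegAck    ack = { 3, 1042 };
    printf("update confirmed: %s\n",
           registration_confirmed(&upd, &ack) ? "yes" : "no");
    return 0;
}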
Figure 4.2: OPNET agent node structure

Figure 4.2 shows a simple representation of the agent node structure and distribution. In addition, OPNET Modeller allows us to present the node model which was modified in order to provide autonomic behaviour to a set of autonomic base stations within a self-managed wireless access network. Wireless connectivity is achieved through the use of IEEE 802.11b interfaces, permitting roaming among networks. This type of interface can be improved by adding an extra communication module between the radio transceiver and the wlan_mac system. This allows a base station to simulate the effect of completely losing connection among devices while avoiding unnecessary queues of packets.

4.3 Network Model
Three different network configurations were constructed to simulate and identify autonomic characteristics, and the agent distribution was decided arbitrarily in order to improve the simulation. Testing Agent TA_1 was applied to a single base station; another station was selected to make use of Testing Agent TA_2 and Node Agent NA_1, while Node Agent NA_2 was modified to operate in all base stations.

4.3.1 Design of Wireless Network Infrastructure
The following steps were followed in order to design a wireless infrastructure in OPNET:
1. Open the OPNET program, select New Project and press OK.
2. Give the project and the scenario a name.
3. Select Create empty scenario and press Next.
4. Choose Campus as the network space and set the size to an X-span and Y-span of 10 kilometres each.
5. The Object Palette Tree opens, listing the various WLAN devices. The Node Models folder in the object palette contains the item wireless-lan-adv, which encloses all the different network devices used in the wireless network presented in this report.

All nodes were modified using the Application Config configuration object and the Advanced Edit Attributes option. In addition, the following wireless parameters were customized, as shown in figure 4.4:
* Physical characteristics
* Data Rate (bps)
* Transmit Power (W)
* AP Beacon Interval (sec)
* Packet Reception-Power Threshold (bytes)

The wireless access network contains ten base stations (Figure 4.5) which are connected via point-to-point duplex links (ppp_adv). Each base station has at least two interfaces: one interface to provide connectivity among wireless mobile devices and another, wired, interface for uplink communication. The network configuration shown above was created in order to simulate and analyse the wireless system when it includes nodes (base stations) on the exterior sector of the network with no more than two neighbour stations close to them. These stations therefore have at most two paths over which to communicate with their neighbouring devices, whereas the rest of the base stations are surrounded by more stations and more possible routes. Figure 4.6 shows the second configuration. There are various potential routes over which base stations and mobile devices may exchange information, so the agents' performance will be tested by their selecting the best path and repairing route problems. The third model, illustrated in Figure 4.7, offers a more densely linked network configuration.
The number of neighbours for every node will increase, and the communication between Node Agents and Testing Agents will improve due to a decrease in the number of paths required for Testing Agent information to reach the suitable Node Agent. Therefore, a superior self-healing performance is expected with this configuration.

4.4 Verification of the Agents' Self-Healing Process upon Base Station Malfunction
To test the correct operation of the agents, different simulations were run on every network model. The main purpose is to test agent reliability and its competence in providing an intelligent self-healing course of action. Consequently, the base stations were programmed to reproduce a failure, and the action of the agents would eventually lead us to simulate an autonomic behaviour. In order to obtain a clearer view of the self-healing performance, a reduced network configuration was simulated (Figure 4.8). Exchange of information among nodes may take different paths until data arrives at its final destination. In the event that a particular base station fails, the permanent monitoring service of the Node Agents detects the malfunction, and the base stations' self-healing method autonomously locates another route, allowing intelligent diagnosis and repair.

OPNET code modifications provide one method of simulating a malfunction in the base station. The most important features required for this process were the use of an acknowledgment mechanism and knowledge of the range capacity of the base stations. These characteristics were required to allow mobile devices to recognize when a failure takes place in a base station and to stop transmitting and routing traffic, in order to start self-healing and path recovery.

4.5 Self-Healing and Route Discovery
The new route discovery was obtained through modifications to the wlan_server_adv and ip_arp_v4 OPNET process models, where code changes were made in order to achieve the desired autonomic behaviour. In a wireless access network, if the base station and mobile nodes are within transmission range of each other, an ARP request can be used in order to find a new route to the target mobile node. The Internet's Address Resolution Protocol dynamically translates IP addresses to their MAC-level addresses. The full OPNET source code is given in Appendix Source Code, page 70.

//ROUTE FAILURE
//Route failure was created by denying connection service for a given destination address. The program looks
//through the ARP table entries to find an entry for the destination IP address. If the given IP address does not
//match any entry in the ARP table, the program returns a ROUTE FAILURE situation. If a matching entry is
//found, a SUCCESS connection takes place.
static Compcode arp_cache_entry_find (IpT_Address dest_ip_addr, int* index_ptr)
{
int table_size;
int i;
IpT_Arp_Entry* entry_ptr;

//Find the entry in the ARP cache for the given destination IP address.
table_size = op_prg_list_size (arp_cache_lptr);
for (i = 0; i < table_size; i++)
{
entry_ptr = (IpT_Arp_Entry *) op_prg_list_access (arp_cache_lptr, i);

//Match the to-be-resolved destination IP address with the entry's IP address.
if (ip_address_equal (dest_ip_addr, entry_ptr->ip_addr) == OPC_TRUE)
{
*index_ptr = i;
FRET (OPC_COMPCODE_SUCCESS)
}
}

//No matching entry was found: report a route failure.
FRET (OPC_COMPCODE_FAILURE)
}

When a new route is discovered (the SUCCESS connection case), the information needs to be sent to an explicit destination (the mobile node) as specified in the "Destination Address" attribute.
If the specified destination address is correct, the code generates a destination and forwards the appl_packet to the MAC layer with that information.

if (destination_address == OMSC_AA_AUTO_ASSIGN)
{
curr_dest_addr = OMSC_AA_AUTO_ASSIGN;
oms_aa_dest_addr_get_core (oms_aa_handle, integer_mac_address, (int) mac_address);
curr_dest_addr = integer_mac_address;
}
else
{
//Use the value of the "Destination Address" attribute.
curr_dest_addr = destination_address;
}

// Set this information in the interface control information (ICI) to be sent to the MAC layer
op_ici_attr_set_int64 (wlan_mac_req_iciptr, "dest_addr", curr_dest_addr);

// Install the control information and send the packet to the MAC layer
op_ici_install (wlan_mac_req_iciptr);
op_pk_send (pkptr, outstrm_to_mac);
send_paket = op_ici_create_fmt ("appl_packet");
sendID = (SPkt *) op_prg_mem_alloc (sizeof (SPkt));
}

In order to keep the code modifications as simple as possible, the new path discovery was implemented through a simple Request-Response communication between the base station and the mobile node. Transmission of selected configuration parameters from the base station to the mobile node is made possible by the creation of the autonomic agents and their interaction. The agents' configuration is also carried out in the OPNET Modeller simulation using the Node Editor, as described in Figure 4.9.
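As a rough, standalone illustration of the Request-Response exchange described above, the following C sketch models a base station issuing a path-discovery request and a mobile node answering it. The types, identifiers and function names are illustrative assumptions and are independent of the OPNET model code.

#include <stdio.h>

/* Hypothetical path-discovery request sent by a base station. */
typedef struct
{
    int request_id;      /* identifies this discovery attempt          */
    int target_node;     /* mobile node whose route is being rebuilt   */
} PathRequest;

/* Hypothetical response returned by the mobile node. */
typedef struct
{
    int request_id;      /* echoes the request it answers              */
    int responder_node;  /* mobile node that replied                   */
    int reachable;       /* non-zero if the node accepted the new path */
} PathResponse;

/* The mobile node answers a request addressed to it. */
static PathResponse mobile_node_answer(int node_id, const PathRequest *req)
{
    PathResponse resp;
    resp.request_id     = req->request_id;
    resp.responder_node = node_id;
    resp.reachable      = (req->target_node == node_id);
    return resp;
}

int main(void)
{
    PathRequest  req  = { 501, 7 };                 /* base station asks for node 7 */
    PathResponse resp = mobile_node_answer(7, &req);

    if (resp.reachable && resp.request_id == req.request_id)
        printf("new path to node %d confirmed\n", resp.responder_node);
    else
        printf("path discovery failed\n");
    return 0;
}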