Thursday, October 31, 2019

Write a critical commentary on the following document about the Lille Literature review

Write a critical commentary on the following document about the Lille to Paris Hunger March, 18 November - 2 December 1933 - Literature review Example This study assesses the document and offers a viable commentary on the Lille to Paris Hunger March. The document seeks to expound on the problematic factors that contributed to this march. Mass unemployment emerges as the main cause of the hunger march; it was also the main spark for other protests in France, protests that occurred at different times in history, both before and after the Lille to Paris Hunger March. This commentary is therefore poised to highlight the pre- and post-march periods, present the main arguments and facts with evidence, and finally draw a conclusion from the important points made in the document. The author of this document sought to convey the historical significance of the famous hunger march. The languages used are French and English, though the former is used sparingly in the document (Perry, 2007). The document (Lille to Paris Hunger March) was written with the intention of providing information to Newcastle University students and other persons wishing to know more about historical labour events in France, and specifically the 1933 hunger march. The document is structured so that one can follow the sequence of events that took place prior to and after the famous 1933 hunger march. It is an explanatory document in which the causes, the actual hunger march, and the repercussions of the march are clearly set out, for easier understanding of what really transpired during that period in the history of France. Between the two world wars, France experienced four recessions that contributed heavily to a sharp increase in unemployment. 
The origins of this march lie in the success of the Saint-Nazaire to Nantes march that took place the same year (27-28 June 1933) and was organized by the local CGT (Confédération Générale du Travail [General Confederation of Labour]).

Tuesday, October 29, 2019

Linguistic and Literary Issues in A Tale of Two Cities Essay

Linguistic and Literary Issues in A Tale of Two Cities - Essay Example This paper is a discussion of the linguistic and literary issues in "A Tale of Two Cities". A major characteristic of "A Tale of Two Cities" by Charles Dickens is the tightly unified subplots of the novel. Throughout the novel, the novelist has been careful in adroitly interlinking the different subplots. The novel mainly deals with two parallel plots: the love relation between Charles Darnay and Lucie Manette, and the historical events connected with the French Revolution. However, there are several other underlying subplots distributed throughout the three Books of the novel. They include the story of the great sacrifice by the good-for-nothing lawyer Sydney Carton, the comparison between the two cities of London and Paris, and the atrocities of the aristocrats, along with stories within the story such as the imprisonment of Dr. Manette and the story of Madame Defarge. The overall setting of the novel is based on these interconnected subplots, which contribute to each other as well as to the meaning of the novel in general. The novelist has been effective in presenting the major themes of the novel through the literary device of setting. That is, the setting of the novel, which incorporates the interrelated subplots, functions as a literary device in the novelist's ultimate conveyance of the major themes. Therefore, the subplots in "A Tale of Two Cities" work in relation to each other to reveal the major themes of the novel.

Sunday, October 27, 2019

Health Promotion: Post Natal Depression

Health Promotion: Post Natal Depression The issues of health and health promotion initiatives have gained much significance in the recent past. Stephens (2008:5) comments that, from a social perspective, health is understood as much more than a matter of individual experience and responsibility; health behaviour is seen in terms of relationships with others, and health is structured by society. The World Health Organisation (WHO) constitution of 1948 defines health as a state of complete physical, social and mental well-being, and not merely the absence of disease or infirmity. With respect to health promotion, it adds that health is considered less an abstract state and more a means to an end, which can be expressed in functional terms as a resource permitting people to lead an individually, socially and economically productive life (WHO, Geneva, 1986). WHO recognizes the spiritual dimension of health, regards health as a fundamental human right, and states that the basic resources for health should be accessible to all people. According to WHO, health promotion represents a comprehensive social and political process which not only embraces actions aimed at strengthening the skills and capabilities of individuals, but also acts to change social, environmental and economic conditions so as to alleviate their impact on public and individual health. It is also the process of enabling people to increase control over the determinants of health and thereby improve their health (WHO official website). The concept of the social determinants of health needs to be considered while discussing health and health promotion initiatives. According to a study conducted by Bambra et al. (2008), the wider social determinants of health are water and sanitation, agriculture and food, health and social care services, unemployment and welfare, working conditions, housing and community, education, and transport. 
The term health promotion has variously been used to refer to a social movement, an ideology, a discipline, a strategy, a profession, and a field of practice delineated by commitment to key values (Keith and Tones, 2010). According to O'Donnell (2009), health promotion is the art and science of helping people discover the synergies between their core passions and optimal health, enhancing their motivation to strive for optimal health, and supporting them in changing their lifestyle to move toward a state of optimal health, which is a dynamic balance of emotional, social, spiritual and intellectual health. Tones and Tilford (2009) are of the opinion that health promotion, as a quasi-political movement and professional activity, can be described as the militant wing of public health. At a general level, health promotion has come to represent a unifying concept for those who recognize the need for change in the ways and conditions of living to promote health (Fleming and Parker, 2006).
Post natal depression
Postnatal depression is one of the most discussed topics in health today. This assignment discusses postnatal depression in detail, considering its significance and the associated risks among women in the United Kingdom. There has been growing international recognition of postnatal depression as a major public health concern (Oates et al., 2004). Government policy (Department of Health, 2004) recognises that mental disorders during pregnancy and the postnatal period can have serious consequences for individual women, their partners, babies and other children. 
Perinatal psychiatric disorder is one of the leading causes of maternal morbidity and is the leading cause of maternal mortality in the UK (Confidential Enquiries into Maternal Deaths, 2001). NICE (2007) observes that mental disorders which occur during pregnancy and the postnatal period can seriously affect the health and wellbeing of a mother and her baby, as well as of her partner and other family members. This condition is a form of maternal morbidity that affects about one in eight women from diverse cultures and is a leading cause of maternal mortality (Dennis, 2009). Dennis (2009) also comments that postnatal depression can have serious consequences for the health and well-being of the family, as infants and children are particularly vulnerable to it: impaired maternal-infant interactions can have an impact on the cognitive, social, emotional and behavioural development of the children. According to the latest reports, it is estimated that approximately 75,000 women within the United Kingdom are affected by postnatal depression (Hanley and Hanley, 2009). Craig (2008) comments that postnatal depression has been variously defined as non-psychotic depression occurring during the first six months, the first four weeks, or the first three months post partum; recently, three months postpartum was suggested in the United Kingdom. There have been many views from various authors about postnatal depression. Wheatley (2006) comments that postnatal depression affects between 10 and 20 percent of women who have had babies, and that it causes distress at a time when there is every reason for happiness. Wheatley (2006) adds that the symptoms vary from person to person: for some women they can be mild, while for others they can lead to serious consequences, including bouts of depression. The proportion of cases of postnatal depression serious enough to warrant treatment is between 7% and 35%. 
Dalton and Holton (2001) state that postnatal depression is one of the symptoms of a serious mental condition known as postnatal illness. They opine that postnatal illness covers a range of afflictions, from sadness to infanticide, which start after childbirth. The disorders associated with postnatal illness are the blues, postnatal depression, puerperal psychosis, and infanticide or homicide. Dalton and Holton (2001, p.3) define postnatal depression as the first occurrence of psychiatric symptoms severe enough to require medical help occurring after childbirth and before the return of menstruation. They add that it does not include the blues, and excludes the condition of those who have previously sought psychiatric help because of other psychiatric illnesses such as schizophrenia, manic depression, depression or drug abuse. Feeney (2001) is of the view that although the central symptom of postnatal depression is dysphoric (depressed) mood, this state is also accompanied by other symptoms such as extreme fatigue, strong feelings of guilt, disturbance of sleep and loss of appetite. Hanzak (2005) attributes the occurrence of postnatal depression to three factors: biological, psychological and social causes. She lists some of the possible reasons for postnatal depression as a history of disturbed early life, loss of one's own mother, current marital or family conflicts, infertility and investigations for four or more years, loss of a previous pregnancy, adoption or fostering, high medical anxieties over the pregnancy, admission to hospital for longer than one week over the last three months of pregnancy, major upheavals or stress over the last three months, emergency Caesarean section, neonatal illnesses, hormonal changes, and a personal or family history of depression. Walsh (2009) comments that the occurrence of postnatal depression is linked with the birth experience. 
Parker (2009) had earlier opined that if the birth was traumatic, there are high chances of postnatal depression. Epidemiological factors of poverty, social class and low income influence the chances of postnatal depression (Gale and Harlow, 2003). Walsh (2009) puts forward the view that postnatal depression can affect fathers and children, and hence it is important to maintain communication and interaction between family members. Cox and Holden (2001) are of the opinion that the consequences of maternal depression are costly not only on a personal level, but in terms of money and personnel as well. They put forward an interesting point: even when contact between professionals and mothers is high, detection of postnatal depression is very low, and the failure to diagnose depression may be attributed to short appointments, a physical orientation of care, and an emphasis on the baby's rather than the mother's well-being. Most cases of postnatal depression can be dealt with at primary care level, with monitoring by the family doctor and interventions by primary care staff (Cox and Holden, 2001).
Health promotion models and approaches
Dahlgren and Whitehead (1991) proposed that the factors which influence health are multidimensional, and suggested a model which illustrates the wider determinants of health. The main factors, according to them, are general socioeconomic, cultural and environmental conditions; living and working conditions; social and community influences; individual lifestyle factors; and age, sex and hereditary factors. The model depicts individuals as central characters who are influenced by various other determinants, which play a major role in shaping their health. Source: Dahlgren and Whitehead (1991). Another model widely discussed in relation to health promotion is the stages of change model. 
Bunton et al. (2000) propose that the transtheoretical or stages of change model has greatly influenced health promotion practices in the United States of America, Australia and the United Kingdom since the late 1980s. The stages of change model was focused on encouraging change for people with addictive behaviour. People go through several stages when trying to change behaviour (Naidoo and Wills, 2000). Fertman (2010) asserts that behaviour change occurs in stages and that a person moving through these stages in a very specific sequence constitutes the change. According to this model, there are five stages of change: pre-contemplation, contemplation, preparing for change, making the change, and maintenance. The health belief model is a well-known theoretical model which emphasises the role of beliefs in decision making. This model, proposed by Rosenstock (1966) and modified by Becker (1974), holds that whether or not people change their behaviour will be influenced by an evaluation of its feasibility and a comparison of its benefits weighed against the costs. Evans et al. (2005) comment that the three major health promotion approaches are the behaviour change approach, the self-empowerment approach, and the collective action or community development approach. They add that these approaches have different goals, adopt different ways to achieve them, and propose different criteria for their evaluation, though they share a common aim: to promote good health and to prevent the effects of ill health. Each of these approaches has a unique understanding of the origins of health and health behaviour, and subsequently of its objectives in health promotion, and the three approaches are mutually complementary (Victorian Health Promotion Foundation, 2004). 
NICE (2007) defines behaviour change as the product of individual or collective human actions, seen within and influenced by their structural, social and economic context. Resnicow and Vaughan (2006) comment that the study of health behaviour change has historically been rooted in a cognitive-rational paradigm. Models such as social cognitive theory, the health belief model and the transtheoretical model have viewed behaviour change as an interaction of factors such as knowledge, attitude and belief (Rimer and Lewis, 2002; Baranowski et al., 2003). The evidence suggests that behaviour change occurs in stages or steps, and that movement through these stages is cyclical, involving a pattern of adoption, maintenance, relapse and re-adoption over time (Di Paitro and Hughes, 2003). According to NICE (2007), attempts to promote or support behaviour change take a number of forms: activities which can be delivered at a number of levels, ranging from local, one-to-one interactions with individuals to national campaigns. NICE (2007) divides interventions into four main categories: policy, such as legislation; education or communication, such as one-to-one advice, group teaching or media campaigns; technologies, such as the use of seat belts or breathalysers; and resources, such as leisure centre entry, free condoms or free nicotine replacement therapy. According to the Victorian Health Foundation (2004), the behavioural approach focuses on implementing interventions to change or remove behavioural health risk factors. 
Interventions from this perspective are targeted at a particular behavioural risk factor associated with a particular negative health outcome; they target a population performing the behavioural risk factor and endeavour to promote health through various strategies. However, Craig et al. (2008) add that behaviour change interventions are generally complex to design, deliver and evaluate. Michie (2008) states that more investment in developing the scientific methods for behavioural change studies is essential. Behavioural science is relevant to all phases of the process of implementing evidence-based health care: development of evidence through primary studies, synthesis of the findings in systematic reviews, translation of evidence into guidelines and practice recommendations, and implementation of these recommendations in practice (Michie, 2008). Dunn et al. (2006) propose that Item Response Modeling (IRM) can be used to improve the psychometric methods in health education and health behaviour research and practice. They add that IRM is already being adopted to improve and revise quality-of-life questionnaires. However, Masse et al. (2006) comment that a number of issues seem to stunt the application of IRM methods, listing the following: (i) a lack of IRM applications in the context of health education and health behaviour research; (ii) a lack of awareness of what IRM can do beyond assessing the psychometric properties of a scale; and (iii) a lack of trained psychometricians in the field. It is to be noted that the behaviour change approach has come under criticism from various quarters. 
The major criticisms pointed out by Marks et al. (2005) were the inability to target the major socio-economic causes of ill health, possible incompatibilities of top-down recommendations with community norms, values and practices, the assumption of a direct link between knowledge, attitudes and behaviour, and the assumption of homogeneity among the receivers of health promotion messages.
Post natal depression - current significance and ethical considerations
Postnatal depression is a matter of serious concern in the current age, as many women are affected by it. Almond (2010) comments that postnatal depression can be deemed a public health problem, as its effects are known to go beyond the mother, also affecting the partner and the child. He adds that it can lead to infanticide as well as maternal death by suicide, and that, according to the evidence, all countries face the challenge of postnatal depression, with low- and middle-income countries most affected. The NICE guidelines for the clinical management of antenatal and postnatal mental health (2007) have noted the risks associated with postnatal depression. Ramchandani (2005) concurs, observing that postnatal depression in fathers can have long-term behavioural and emotional consequences for the development of their child. A study entitled 'Children of the 90s' by Bristol University in 2008 found that postnatal depression in fathers can have long-lasting psychological effects on their children. A notable observation in this study was that boys born to depressed fathers are twice as likely as other boys to develop behavioural problems by the age of three and a half. It is essential to look into the long-term consequences posed by the problem. 
Ramchandani (2008) points out that conduct problems at the age of three to four years are strongly predictive of serious conduct problems in the future, increased criminality and significantly increased societal costs. Ramchandani's observations point to the threats posed by depression among the fathers of newborn babies. The impact of postnatal depression can be highly detrimental to society, as shown by the recent unfortunate case of a depressed teacher killing her baby in Exeter as a result of the depression.
Policy drivers
There have been many developments over the last few years in policy on mental health and women's services (NICE, 2007). The NSF for Child Health and Maternity, published in 2004, is a 10-year programme aimed at long-term and sustained improvement in children's health. Setting standards for health and social services for children, young people and pregnant women, the NSF aims to ensure fair, high-quality and integrated health and social care from pregnancy to adulthood (NHS, 2007). NICE (2007) lists the four main strands of policy relevant to antenatal and postnatal mental health as: National Service Frameworks (NSFs), particularly the mental health NSF and the NSF for children, young people and maternity services; policy to ensure equal access to responsive mental health services, especially services that meet the needs of women and people from minority ethnic groups; public health policy and policy on commissioning and delivering health care and social care services in the community; and policy concerned with strategies for improving mental health services. Screening for postnatal depression is much discussed in the fields of psychology and medicine today. Currie and Radematcher (2004) argue that paediatric providers are aware of the prevalence of postnatal depression and its effect on newborn babies. 
However, there have been arguments for and against screening for postnatal depression, and hence practitioners should consider them carefully (Coyne et al., 2000). The view proposed by Chauldron et al. (2007) is that, from the legal and ethical standpoints and the perspective of feasibility, the benefits of screening outweigh the risks. However, they add that implementation must be seen as an iterative process, and that implementing screening for postnatal depression in a systematic and comprehensive way is critical to the ultimate well-being of children and families. Basten (2009) proposes that more studies in the fields of psychotherapeutic research and psychology are required. This is in conformance with the observation by de Tychey, Briançon et al. (2008) that diagnostic techniques need to be improved for both caregivers and sufferers through education, and that communication should be promoted, focusing on the fostering of parenting skills as a preventive measure against postnatal depression. A recent study by Norman et al. (2010) found that exercise can help women combat postnatal depression, and that specialised routines could help new mothers decrease the chances of depression by up to 50 percent.
Partnership working
Partnership working is a very important term in the current health and social care system in the United Kingdom. Partnership working can be defined as a system in which two or more disciplines work collaboratively to deliver optimal care to an individual (NHS, 2007). In the context of postnatal depression, partnership working refers to working in partnership with the team involved with the mother and the newborn baby, which includes paediatricians and obstetricians (Byrom et al., 2009). Douglas (2008) points out that partnership working is recognised as the most effective way of improving social care services. 
The Department of Health (2006) stressed that action to improve health and care services will be underpinned by working in partnerships between individuals, communities, business, voluntary organizations, public services and government. Butt (2008) argues that partnerships have international appeal as a means to integrate health and social services, in response to the realisation that both sectors serve populations whose complex needs cannot be met adequately through segmented approaches. Partnership working with women having mental health problems can be a challenging task (Department of Health, 2008). According to NICE (2007), the impact of partnership working is a function of a number of features of joint working, and it is possible to categorise partnerships along a number of descriptive variables, such as membership, structures, leadership, agendas and organisational cultures. Previous studies have shown that, through the joint working of the people involved in the care of women with postnatal depression, a trusting partnership can be developed between carers, patients and professionals which will be beneficial to all. Feeney et al. (2001) proposed that working in partnership with families is an essential component of effective programming in the early developmental stages of children. Hence partnership working holds a very important role in the postnatal period, as it can relieve the emotional stress which many women go through. It was observed by NICE (2007) that developing trust and accommodating relationships within facilitating partnerships is imperative to the attainment of partnership goals, and that issues of process are highly important building blocks of success. Sorin (2002) comments that there are many reasons to establish partnerships, and asserts that the family is the most significant influence on the mother's postnatal health as well as the child's development and well-being. 
Sorin (2002) adds that partnerships that develop to address fear and other emotions can work towards understanding the appropriate expression of these emotions, which includes learning words to describe the emotion through forms such as music and talking to others. A report on safety in maternity services published by the King's Fund (2008) emphasises the significance of teamwork and collaboration in ensuring the safety of mothers and babies, and points out that effective teamwork can increase safety, whereas poor teamwork can be detrimental to it. The report proposes several solutions to resolve the difficulties in teamwork. The main suggestions include ensuring clarity about the objectives of the team and its roles, ensuring effective leadership within the group, and having clear procedures for communication (Byrom et al., 2009). It is important to look into the barriers which affect partnership working. Lester et al. (2008) comment that there are barriers to closer working in partnerships, including cultural differences, the time required to create and maintain relationships, and recognition of the advantages of remaining a small and autonomous organisation.
Conclusion
This essay has critically analysed the behavioural change approach as an intervention for postnatal depression, addressing the needs of the women most at risk in the United Kingdom. The various factors which lead to postnatal depression have been explained in the essay. It can be concluded that postnatal depression must be taken seriously, and that its impact can have serious consequences for society. The various health promotion models portray the linkages between beliefs and behavioural changes. The essay has pointed out the importance of partnership working in improving the conditions of mothers and newborn babies. 
Effective working in partnerships can go a long way in alleviating the concerns of mothers and improving the mental health of newborn babies, since the early months play a very important role in shaping a child's future development. A recent study by the University of Leicester found that women are less likely to become depressed in the year after childbirth if they have an NHS health visitor who has undergone additional mental health training. These findings point to the fact that postnatal depression can be effectively tackled with external help. Studies of postnatal depression and of partnership working have been very effective in improving the health care system in the United Kingdom, and hence serve as an interesting topic for future research in the field.

Friday, October 25, 2019

Operation Barbarossa - Hitler's Russian Offensive :: World War II History

Operation Barbarossa - Hitler's Russian Offensive The Russians would never have joined the war had it not been for the German invasion of 1941 - Operation Barbarossa. This parallels the USA's intervention - they only joined because the Japanese bombed Pearl Harbour. Operation Barbarossa commenced on 22 June 1941, when just over 3,000,000 German troops invaded the USSR. Stalin, who had doubted the country's ability to perform well on the battlefield since the Finnish War, refused to counteract the German preparations for fear of provoking them into war. The Russians concluded that the German form of attack - the Blitzkrieg - would not be possible in Russia. The German infantry outnumbered the Russian, but the Russians had more artillery and aviation forces. The Russian infantry was told not to retreat, and so was destined to be destroyed or captured. The Germans set up three army groups and assigned them to three different objectives: North - Leningrad; Centre - Moscow; South - Kyyiv. The generals agreed that they had to lock the Russian forces into battle in order to prevent them escaping into the rest of the vast country. However, they disagreed on how to do this. The majority of them thought that the Russians would sacrifice everything to protect Moscow: the capital, the centre of industry, the centre of all the networks and transport. Hitler disagreed. He believed that the Ukrainian area - for its resources - and the oil of the Caucasus were much more crucial. A compromise was made: Army Group Centre would march towards Moscow. Victory was predicted within ten weeks. This timing was crucial because it would be impossible to fight once the short Russian summer had ended. At first, things happened even faster than planned. In the first month the Germans had already encircled Bialystok and Minsk, and on August 5th they crossed the Dnepr River, the last natural obstruction before Moscow. The group defeated a small force in Smolensk, capturing another 300,000 men. 
Having reached Smolensk, Army Group Centre was two-thirds of the way to Moscow. Hitler then decided to change the plan. Ignoring the generals' protests, he diverted the group's forces to help the other two groups, thereby stopping the advance on Moscow. On September 8th Army Group North, together with the Finnish army, laid siege to Leningrad. On September 16th Army Group South captured Kyyiv, taking 665,000 prisoners.

Thursday, October 24, 2019

Le film et le roman (The Film and the Novel)

Many say that movies and books differ a great deal. Books provide a more detailed view of characters and of the events that occur, whereas movies leave out information and sometimes distort the moral of the story. In the movie and the book Elle s'appelait Sarah (Sarah's Key), it can be seen that movies based on books do not portray the same events and themes. Ultimately this takes away from the emotions one feels towards certain situations. Differences can be seen in the relationships between certain characters, in the way the traits of certain characters are shown, and in events that were changed, which takes away from the overall meaning of the story. To begin with, the first difference is in the relationships between certain characters, especially Julia and Bertrand. In the book, their relationship is very tense and not very strong. This can be seen when Bertrand insults Julia in front of his assistant about how Americans think they are the best, and Julia reflects on it to herself (de Rosnay, 36-37). From this scene one can see that the relationship between Julia and Bertrand lacks love and affection. Julia feels silly and ridiculed by Bertrand and does not understand why he chooses to act this way. However, in the movie Antoine is not in this scene, and their relationship is strong and working well. As well, later on in the novel, Julia finds out she is pregnant and thinks Bertrand will be happy to know this. After telling him, she finds out that he is not happy; she is devastated, as can be seen in her reaction (156-166). This shows how terrible and angry Julia feels that Bertrand does not want to keep the baby. In the movie, Julia is a little upset, but she quickly recovers and it does not seem to bother her for too long. In both instances, the book has a more detailed way of portraying her thoughts, whereas the movie fails to do so. 
In the end this takes away from the emotions and attachment viewers should feel towards her, and leaves them surprised when the couple splits up. That is how the relationships are changed between the movie and the book. Not only is there a difference in the relationships, but also in the way the characters are shown. In the book, Sarah's character is naive and innocent; in the movie, however, she is intelligent and clever. While at the camp, Rachel makes the plan to escape, and at first Sarah hesitates but then agrees; this can be seen when Rachel confidently tells her that they are going to escape and leave the camp (32). In the movie, though, Sarah is the one who brings up the idea and tells Rachel. Moreover, in the book, when Sarah and Rachel escape, the police officer knows Sarah and eventually lets her go (139). In the movie, the policeman does not know her and lets her go out of the sympathy he feels for all the children. Through the events that happened at the camp, the differences are clearly noticeable. Sarah's character may have been changed to be more courageous because she is constantly reminded that she is to blame for hiding Michel. In the movie her parents yell at her for doing this, whereas in the book they do not put much emphasis on it, illustrating that her parents know she is too young to understand the situation. This takes away from her loving character and from the fact that she is under the pressure of her parents blaming her for what she did. Hence, this changes the way Sarah's character is shown in the movie and the book. Furthermore, another difference is that in the novel Julia and Zoe both go to America, whereas in the movie Julia ends up going alone. This is different in the book because there Zoe plays an important role in encouraging her mother; she is the one who pushes Julia not to give up and actually go and meet William.
In the movie, therefore, when she meets William, Zoe is not with her. Taking out these events takes away from Zoe's character and makes Julia seem stronger than she actually is. Furthermore, this leads to William being in denial about his mother's past. In the book he is surprised by what he learns and chooses to remain ignorant, and neither William nor his father knows about Sarah's past. In the movie, however, a scene is added in which William meets his sick father and learns that his father actually knew what happened but never chose to tell him. This is significant because it changes the story: the secret becomes one that only William does not know about. As a result, this takes away from the whole purpose of Julia's journey of being able to tell William about his mother's past. It also alters the theme of forgetting the past, as in the book Sarah keeps to herself and does not tell anyone, while in the movie Sarah tells someone but still ends up committing suicide because it was too much to handle. This shows that even sharing her story did not take the burden from her heart. Therefore, it is evident that movies based on books do not portray the same events and themes, and that this takes away from the emotions one feels towards certain situations. Reading a book allows one to use one's creativity and imagine what is happening; movies just show what is happening and sometimes end up changing the storyline. As seen in Elle s'appelait Sarah, many things were changed, including the relationships between characters, the traits of certain characters, and several events. In the end, this altered the themes and made it harder for viewers to understand the story. Thus, it can be concluded that movies based on books have a lot of differences.

Wednesday, October 23, 2019

Impressionism †Monet and Renoir Essay

Impressionism was the name given to one of the most important movements in art history. It was the first of the modern movements. Its aim was to achieve ever greater naturalism by a detailed study of tone and colour and by an exact rendering of the way light falls on different surfaces. This interest in colour and light was greatly influenced by the scientific discoveries of the French chemist Chevreul and by the paintings of Delacroix. Instead of painting dark shadows using mainly different tones of grey and black, the Impressionists – like Delacroix – realised that when an object casts a shadow, that shadow will be tinged with the complementary colour of the object. They did not use firmly drawn outlines but instead applied paint in small, brightly coloured dabs, even in the shadowy areas of their pictures. This lack of outline and multiplicity of small dabs of pure colour, combined with the Impressionists' interest in fleeting effects of light, give their pictures a constant air of movement and life, but also of impermanence. There was nothing as formal as a manifesto or even an agreed programme among the Impressionists. They were all individual artists working in their own way, developing their own styles. They were, however, agreed in a general way on a number of points regarding subject matter. Their work should be modern, observed with detachment, and not historical or emotional. The view was that the subject itself is not of particular interest, but rather the way in which light and colour decorate it, as described by Monet: "for me, it is only the surrounding atmosphere which gives subjects their true value". The Impressionist artists often painted together in small groups, depicting open-air scenes on the banks of the Seine and in the parks and recreation places of the middle classes around Paris. The bathing place and floating restaurant at La Grenouillère provided the location for a number of sketching trips for Monet and Renoir.
In the later years of his life, Claude Monet devoted himself to creating a beautiful water garden at his home in Giverny, and painted this garden continuously. ‘Water Lily Pond – Harmony in Green’ is one of the many paintings of his garden and truly epitomizes the characteristics of the Impressionist style. The painting depicts a Japanese-style bridge (which he designed himself) with a small pond, largely covered in lilies, running underneath it. Monet had a huge collection of Japanese prints, and many of the plants in his garden were ones he had seen in these prints. It is quite possible that this painting was inspired by one of them. In the painting, the weeping willows in the background are reflected in the water between the lilies. Although Monet loved plants and flowers and collected rare species, he was not interested in distinguishing them in a painting; it was their reflections in the water which interested him. The surface of the painting is a rich carpet of colour, with brush strokes of yellow, pink and lavender woven in with the shimmering green of the plants. The colours reflect a brilliant sunshine, with the flowers indicated by blobs of white tinged with yellow and pink. He painted this view of the bridge from a small boat he kept moored for painting the water. Auguste Renoir (1841-1919) painted ‘Luncheon of the Boating Party’ in 1881, and it marks the end of his Impressionist phase. The painting is one of his last in an Impressionist style and truly captures the concepts and styles native to the movement. Soon after, he and Pissarro would diverge from the ideals of Impressionism and change the course of their art. The scene is set in a riverside restaurant, a favourite spot for boating enthusiasts and their girlfriends. It is the end of lunch, and the remains of the food and drink are on the table. All appear to be enjoying themselves after the boating expedition.
The composition of the picture is linked together by the interchange of glances among the members of the group. The girl in the centre leaning on the rail leads the eye to the three on the right; a relationship of some kind seems to be suggested by the artist. Among the group is the actress Ellen Andrée, who posed in ‘Absinthe’ for Degas. The woman on the left-hand side with the dog is Aline Charigot, Renoir’s future wife and favourite model. The figures are posed in a natural manner and the composition is open, so the spectator feels part of the group. Monet and Renoir were two of the leading members of the Impressionist movement, both epitomizing the ideals and characteristics of Impressionism in their work. As the examples discussed above show, the brushwork and colouring styles of the Impressionists are clearly visible in Monet’s ‘Water Lily Pond – Harmony in Green’. Equally significant, the subject matter and content agreed upon by the members of the movement can be seen in Renoir’s ‘Luncheon of the Boating Party’: the painting is free of emotion and historical reference, is viewed with detachment, and depicts the modernity of its time. Personally, I believe Renoir and Monet to be among the greatest artists of their time; in developing these styles and establishing Impressionism, they were truly at the forefront of the movement. With all the characteristics of the movement evident in their work, they are a perfect representation of Impressionism.

Tuesday, October 22, 2019

COSO and Basel Essay

Financial Collapses and Regulations (New England College of Business). In an era of risky investments and failed financial institutions, additional importance is being placed on businesses implementing Enterprise Risk Management (ERM) plans. ERM is defined by the Institute of Internal Auditors (2012) as an approach, implemented by management, designed to identify, quantify, respond to, and monitor the consequences of potential events. Without an ERM plan, transparency to shareholders and internal accountability are nearly impossible to achieve. COSO and Basel are both reactive frameworks responding to regulatory changes that forced institutions to show more transparency in their financial reporting, in order to manage operational risks, mitigate the likelihood of a collapse, and ensure stability in volatile market conditions (Farnan 2004; Balin 2008); these measures increase investor confidence. This comparative analysis of COSO and Basel seeks to identify the common measures that are necessary to form a functional ERM plan, the most important being the accountability of management and its communication with the Board (The New Basel Accord 2003). A Comparative Analysis of ERM Guidelines: COSO I/II and Basel I/II. Introduction. Due to the epidemic of failed financial systems seen over the past decade, agencies and private organizations (e.g., the Securities and Exchange Commission, NICE, etc.) have set in place guidelines for the standardization of reporting and evaluating risk, in an effort to eliminate surprise collapses in the future (NICE Systems Ltd. 2012). Alexander Campbell, Editor, Operational Risk & Regulation, states that regulatory approaches are changing and requiring companies to streamline processes for monitoring internal risks, such as fraud (NICE Systems Ltd. 2012).
Common goals of the organizing committees trying to tackle regulatory challenges are to improve communication between the board and management, increase shareholders' confidence, and, most importantly, have entities thoroughly evaluate their liquidity so that in the event of a crisis, investors' assets are secured (Bressac 2005; Decamps, Rochet, Roger 2003). This comparative analysis of COSO and Basel identifies the standards these documents set for institutions to maintain an Enterprise Risk Management (ERM) plan, as well as the effects these documents' shortcomings and constraints have on entities which apply either COSO or Basel. Enterprise Risk Management (ERM) is defined by the Institute of Internal Auditors (IIA) (2012) as an approach, implemented by management, designed to identify, quantify, respond to, and monitor the consequences of potential events. It is important for all parties affiliated with an institution's ERM plan to clearly identify and understand the events that impact a company's value in order for the entity to achieve its objectives (IIA 2012). The COSO and Basel frameworks are both reactive solutions to public events in which the lack of an adequate ERM plan contributed to the collapse of a major institution or market, with a detrimental effect on the public (Farnan 2004; Lall 2009). Both documents have been explored by many key opinion leaders in the financial industry, and while each provides a set of guidelines for developing successful ERM protocols, each also fails to be foolproof. Shaw (2006) argues that while the COSO standard was groundbreaking at the time, it was not meant to be a marking guide for controls. Moreover, in regard to Pillar 3 of the Basel Accord, which depicts methods of Value-At-Risk (VAR) calculation, Standard and Poor's noted that although these VAR methods appear to offer mathematical precision, "they are not a magic bullet" (Lall 2009).
COSO and Basel can be seen as significant steps forward for their times (Saurina and Persaud 2008). Basel. In 1974, the Basel Committee on Banking Supervision (BCBS) was created (consisting of the G10 plus Luxembourg and Spain) in light of the challenges posed by an increasingly internationalized banking system (Lall 2009). In the 1980s, it became clear (after the Latin American debt crisis of 1982) that a process was needed to regulate the international banking system, to mitigate risk and manage losses (Lall 2009). The first Basel Accord and Basel II, referred to together as Basel, constitute a method of risk management, specifically for financial institutions operating on a multinational level, that sets minimum capital requirements (8% of adjusted assets (Decamps, Rochet, Roger 2003)) which these institutions must uphold to minimize the risk of a collapse in the international banking system (Lamy 2006). Basel I, the first international accord on bank capital, was established in 1988 by the BCBS (Finance & Development 2008), with the goal of arriving at significantly more risk-sensitive capital requirements, the primary objective being to ensure stability in the international banking system (Lamy 2006). In 2004, Basel II was introduced, with amendments responding to the Quantitative Impact Study, QIS 3 (published in May 2003), an increase in the amount of capital banks must set aside for high-risk exposures, and feedback from banks on Basel I (Finance & Development 2008; Lamy 2006). The Basel framework is focused on three pillars: a minimum capital adequacy requirement, supervisory review, and market discipline (Decamps, Rochet, Roger 2003). Basel I was highly criticized for its one-size-fits-all approach to formulating institutions' risk-weighted assets (with insensitivity to emerging countries), in addition to unrealistic capital requirements that discouraged even reasonable risk taking (Kaufman 2003).
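The minimum capital requirement described here reduces to simple arithmetic. The sketch below is only an illustration of that ratio check; the exposure amounts and risk weights are invented for the example and come from no real institution.

```python
# Illustrative sketch only: the 8% minimum capital requirement applied to a
# hypothetical bank. All figures and risk weights below are invented.

def risk_weighted_assets(exposures):
    """Sum each exposure multiplied by its assigned risk weight."""
    return sum(amount * weight for amount, weight in exposures)

def meets_basel_minimum(capital, rwa, minimum_ratio=0.08):
    """Basel requires capital of at least 8% of risk-weighted assets."""
    return capital / rwa >= minimum_ratio

# Hypothetical portfolio: (exposure amount, risk weight), with weights in the
# spirit of Basel I's broad categories (0.0 cash, 0.5 mortgages, 1.0 corporate).
exposures = [(100.0, 0.0), (200.0, 0.5), (300.0, 1.0)]
rwa = risk_weighted_assets(exposures)              # 0 + 100 + 300 = 400.0
print(meets_basel_minimum(capital=40.0, rwa=rwa))  # 10% ratio -> True
print(meets_basel_minimum(capital=24.0, rwa=rwa))  # 6% ratio -> False
```

The one-size-fits-all critique above is visible even in this toy: every corporate loan gets the same weight of 1.0 regardless of the borrower's actual creditworthiness.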
In response to these critiques, the BCBS began to draft Basel II, in which the amendments to Pillar 1 (310 of the document's roughly 350 pages (Balin 2008)) were most notable. Balin (2008) describes the menu of options that Basel II provides for Pillar 1, which allows institutions to choose the most suitable approach depending on a series of factors (i.e., size, rating, etc.). The minimum capital requirement pillar focuses on the least amount of capital a bank must maintain to be protected from credit, operational, and market risks (Ahmed and Khalidi 2007). In Basel II, the highly critiqued credit risk requirements were modified to reduce the one-size-fits-all stigma of Basel I (Kaufman 2003). Additionally, Basel II takes into account loopholes in Basel I that enabled banks to maintain their desired level of risk while cosmetically complying with minimum capital adequacy requirements, mainly through transfers of assets to holding companies and subsidiaries (Balin 2008). Similar to the COSO framework, the first pillar of Basel seeks to unite various types of risk into an overall evaluation of capital requirements to safeguard shareholders and investors. Pillar 2, the Supervisory Review, is meant to ensure that banks have adequate capital to support all the risks in their business, including, but not limited to, those covered by the calculations in Pillar 1 (Kaufman 2003). This pillar clearly defines the obligations of supervisory oversight against extreme risk taking; of note is line 680, which states: "Supervisors are expected to evaluate how well banks are assessing their capital needs relative to their risks and to intervene, where appropriate. This interaction is intended to foster an active dialogue between banks and supervisors such that when deficiencies are identified, prompt and decisive action can be taken to reduce risk or restore capital" (The New Basel Capital Accord 2003).
The four principles of Pillar 2 hold supervisors responsible for implementing processes, reviewing, setting expectations, and intervening when warranted in the management of capital risks (The New Basel Capital Accord 2003). Pillar 3 seeks to protect against changes in asset prices (market risk) (Balin 2008), an addition to the credit risk factors of Basel I. Using the Value-At-Risk (VAR) model, banks can estimate the probability of a portfolio's value decreasing by more than a set amount over a given time period (Lall 2009). Critics of the VAR model, such as the International Monetary Fund (IMF), claim that it fails to account for extreme market events and assumes that the processes generating market events are stable (Lall 2009). COSO. In July 2002, the Sarbanes-Oxley Act (SOX) was passed with the goals of, among others, increasing investor and public confidence in the post-Enron era and increasing management accountability (Farnan 2004). Section 404 of SOX states that, effective for some large companies beginning December 31, 2004, a separate management report on internal control effectiveness, audited by the organization's external financial statement auditor, is required (Farnan 2004). COSO's framework lays out a path for developing efficient operations and regulatory compliance methods, and has been established as the framework recommended by agencies such as the SEC for public companies to base their financial reporting on (Farnan 2004). The Committee of Sponsoring Organizations of the Treadway Commission (COSO) is comprised of five private organizations in the financial industry (COSO Web site 2012).
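The VAR idea discussed above can be made concrete with a small historical-simulation sketch: find the loss that was exceeded on only a chosen fraction of past days. This is a generic textbook method, not taken from the Basel text itself, and the daily returns below are made-up sample data.

```python
# Hedged sketch of historical-simulation Value-At-Risk (VAR): the loss
# threshold a portfolio's daily return breached on only (1 - confidence)
# of past days. The return series is invented for illustration.

def historical_var(returns, confidence=0.95):
    """VAR as a positive loss fraction, read off the empirical distribution."""
    ordered = sorted(returns)                      # worst day first
    cutoff = int((1 - confidence) * len(ordered))  # index into the left tail
    return -ordered[cutoff]

daily_returns = [-0.04, -0.02, -0.01, 0.0, 0.0, 0.01, 0.01, 0.02, 0.02, 0.03]
var_95 = historical_var(daily_returns)  # worst observed day: a 4% loss
```

Note that the estimate depends entirely on the historical sample, which is precisely the IMF's critique: it assumes the process generating past returns is stable and says nothing about unprecedented extreme events.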
The COSO organization was established in 1985 with the mission to provide thought leadership through the development of comprehensive frameworks and guidance on enterprise risk management, internal control and fraud deterrence, attempting to enhance success and leadership and to minimize fraud in company reporting (COSO Web site 2012). Since its establishment, COSO has published frameworks aimed at helping publicly traded companies cope with the tough new monitoring requirements mandated by the Sarbanes-Oxley Act (Shaw 2006), and at helping businesses manage risk by looking at business units as an entire entity, with the aim of improving organizational performance and governance and reducing the extent of fraud in organizations (COSO Web site 2012). The COSO framework is a cube comprising four company objectives (three in COSO I) set perpendicular to eight components (five in COSO I) that together form a risk assessment program with which companies can reduce risks by realizing the amount of capital needed to cover consequences (Bressac 2005). Similar to Basel, COSO dictates that the board is responsible for overseeing management's design and operation of ERM (Bressac 2005). One factor the COSO framework includes is the measurement of a company's risk appetite: the amount of risk, on a broad level, an entity is willing to accept in pursuit of value (Rittenberg and Martens 2012). Many objectives that management sets for a company (i.e., increasing market share, winning competitive tenders) include a substantial amount of risk, and COSO's strategic decision-making framework allows managers to present these objectives, in relation to appetite, to the Board for approval (Rittenberg and Martens 2012). Conclusions. Both COSO and Basel were drawn up as responses to new requirements (the Sarbanes-Oxley Act (Shaw 2006) and new capital requirements for banks (Lamy 2006), respectively), and each has principles that can help institutions manage ERM more effectively.
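Risk-appetite discussions like the one above are often grounded in a simple expected-loss score per event (likelihood times impact), against which an appetite threshold can be compared. The sketch below is a generic illustration of that calculation, not a method COSO itself prescribes; every event name and number in it is hypothetical.

```python
# Generic risk-scoring sketch (not prescribed by COSO): score each event as
# annual likelihood x monetary impact, then rank. All inputs are invented.

events = [
    # (event, expected occurrences per year, impact per occurrence in $)
    ("data entry error", 12.0, 1_000),
    ("system outage", 0.5, 50_000),
    ("fraud", 0.25, 120_000),
]

risk_scores = {name: likelihood * impact for name, likelihood, impact in events}
worst = max(risk_scores, key=risk_scores.get)  # "fraud", expected loss 30,000
```

A firm with no loss history has to guess the likelihood inputs, so the ranking is only as credible as those guesses.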
For example, The New Basel Capital Accord (2003) clearly articulates that setting a minimum amount of available capital resources is a vital element of the strategic planning process, and the three pillars devise a plan to do this. Bressac (2005) claims that the COSO II framework articulates a way for managers to deal effectively with the events that create uncertainty for entities and to craft responses that minimize potential losses. COSO and Basel were both released in their infancy and flawed. Samad-Khan (2005) observed that COSO's credibility is diminished because consequences are predicted to occur much more frequently than has historically been recorded. Supporters acknowledge that Basel II contains arcane ideas, but defend it as a step in the right direction because it increases financial oversight and helps ensure banks won't be doomed by crises of confidence (Coy 2008). It is important to note that while COSO and Basel offer much protection through quantitative risk assessments, they must be coupled with the knowledge and insight of senior risk managers to be most effective (Lall 2009; Samad-Khan 2005). Moreover, both COSO and Basel also impose constraints that limit the amount of risk institutions can take on, sometimes excessively. Lall (2009) discusses one failure of Basel II as the ability of developed-nation banks to skew their reports toward their desired results, at the expense of their smaller and emerging-market competitors and, above all, systemic financial stability. Samad-Khan (2005) emphasizes that historical data is still the most reliable way for companies to determine the probability of a risk occurring. Start-ups do not have this historical data, and therefore may overestimate their probability of risk using the likelihood x impact = risk calculation (Samad-Khan 2005) and miss out on potentially positive opportunities. Others against the provisions claim that both documents (e.g.
, Basel in the emerging markets) implement concessions that constrain potential growth by overcompensating for potential consequences and depleting banks' lending capital, which in the 1930s contributed to the Great Depression (Coy 2008). Historical events demonstrate the need for more stringent regulatory guidelines in this era of financial market uncertainty. The most important common factor of Basel and COSO is that each clearly states that it is management's responsibility to have a functional ERM plan in place and to be in communication with the Board about the potential risks the company faces (Bressac 2005; The New Basel Capital Accord 2003). Holding management accountable for the risks the business takes, while making sure that the Board is in agreement with management's plan, creates the necessary harmony of a checks-and-balances system, in turn creating a safer landscape in which shareholders and the public can place their faith. When properly executed,

Monday, October 21, 2019

Good Eating essays

Everyone has heard the adage "you are what you eat," but what does this saying truly mean? For one to be in good health, he or she needs to put good, nutritious food into the body, food that supplies lasting energy. Unfortunately, obesity is a pandemic that has swept across the United States, and the media's perpetual spotlight on the matter has made it a concern for the populace. Some critics believe that it is not the responsibility of the eater. We must therefore pose the question: who is to blame? Do we sympathize with the working man and the poor who are unable to afford healthy foods by placing blame on corporations, do we take responsibility for our own health habits, or do we let others such as the government take the blame? In most cases, the person who is truly at fault when it comes to obesity and weight is the eater, because he or she is the one making the conscious decision of what to eat. However, there are other cases. Sometimes there is not much one can do when living paycheck to paycheck in a low-income community, so some help may be needed. Obesity has even reached children, proving that no one, no matter what age, is safe from this disease. Other health problems arise when one is overweight, such as diabetes, and, "According to the National Institutes of Health, Type 2 diabetes accounts for at least 30 percent of all new childhood cases of diabetes in this country" (Zinczenko 154). In his article "Don't Blame the Eater," David Zinczenko argues that the fast-food industry is contributing to the overwhelming percentage of childhood obesity in the United States. He observes that there are not any healthy alternatives for children and teens to take, so the only option they are left with is cheap, calorie-infested fast food. The blame is being put on corporations because fast-food patrons do not know exactly what they are putting...

Sunday, October 20, 2019

Seaborgium Facts - Sg or Element 106

Seaborgium (Sg) is element 106 on the periodic table of elements. It's one of the man-made radioactive transition metals. Only small quantities of seaborgium have ever been synthesized, so there's not a lot known about this element from experimental data, but some properties may be predicted from periodic table trends. Here's a collection of facts about Sg, as well as a look at its interesting history. Interesting Seaborgium Facts. Seaborgium was the first element named for a living person. It was named to honor the contributions of nuclear chemist Glenn T. Seaborg. Seaborg and his team discovered several of the actinide elements. None of the isotopes of seaborgium have been found to occur naturally. Arguably, the element was first produced by a team of scientists led by Albert Ghiorso and E. Kenneth Hulet at Lawrence Berkeley Laboratory in September 1974. The team synthesized element 106 by bombarding a californium-249 target with oxygen-18 ions to produce seaborgium-263. Earlier that same year (June), researchers at the Joint Institute for Nuclear Research in Dubna, Russia had reported discovering element 106; the Soviet team produced it by bombarding a lead target with chromium ions. The Berkeley/Livermore team proposed the name seaborgium for element 106, but the IUPAC had a rule that no element could be named for a living person and proposed that the element be named rutherfordium instead. The American Chemical Society disputed this ruling, citing the precedent in which the element name einsteinium was proposed during Albert Einstein's lifetime. During the disagreement, the IUPAC assigned the placeholder name unnilhexium (Unh) to element 106. In 1997, a compromise allowed element 106 to be named seaborgium, while element 104 was assigned the name rutherfordium. As you might imagine, element 104 had also been the subject of a naming controversy, as both the Russian and American teams had valid discovery claims.
Experiments with seaborgium have shown it exhibits chemical properties similar to tungsten, its lighter homologue on the periodic table (i.e., located directly above it). It's also chemically similar to molybdenum. Several seaborgium compounds and complex ions have been produced and studied, including SgO3, SgO2Cl2, SgO2F2, SgO2(OH)2, Sg(CO)6, [Sg(OH)5(H2O)], and [SgO2F3]−. Seaborgium has been the subject of cold fusion and hot fusion research projects. In 2000, a French team isolated a relatively large sample of seaborgium: 10 atoms of seaborgium-261.
Seaborgium Atomic Data
Element Name and Symbol: Seaborgium (Sg)
Atomic Number: 106
Atomic Weight: [269]
Group: d-block element, group 6 (transition metal)
Period: period 7
Electron Configuration: [Rn] 5f14 6d4 7s2
Phase: seaborgium is expected to be a solid metal at around room temperature.
Density: 35.0 g/cm3 (predicted)
Oxidation States: The +6 oxidation state has been observed and is predicted to be the most stable. Based on the chemistry of homologous elements, expected oxidation states are +6, +5, +4, +3, and 0.
Crystal Structure: face-centered cubic (predicted)
Ionization Energies (estimated): 1st: 757.4 kJ/mol; 2nd: 1732.9 kJ/mol; 3rd: 2483.5 kJ/mol
Atomic Radius: 132 pm (predicted)
Discovery: Lawrence Berkeley Laboratory, USA (1974)
Isotopes: At least 14 isotopes of seaborgium are known. The longest-lived isotope is Sg-269, which has a half-life of about 2.1 minutes. The shortest-lived isotope is Sg-258, with a half-life of 2.9 ms.
Sources of Seaborgium: Seaborgium may be made by fusing together the nuclei of two atoms or as a decay product of heavier elements. It has been observed in the decay of Lv-291, Fl-287, Cn-283, Fl-285, Hs-271, Hs-270, Cn-277, Ds-273, Hs-269, Ds-271, Hs-267, Ds-270, Ds-269, Hs-265, and Hs-264. As still heavier elements are produced, the number of parent isotopes is likely to increase.
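The half-life figures above translate directly into how quickly a sample disappears, via the standard decay relation N/N0 = (1/2)^(t / t_half). A quick arithmetic sketch using the ~2.1-minute half-life quoted for Sg-269:

```python
# Radioactive decay arithmetic for the half-lives quoted above:
# fraction remaining after time t is (1/2) ** (t / t_half).

def fraction_remaining(t, t_half):
    """Fraction of a radioactive sample left after time t (same units as t_half)."""
    return 0.5 ** (t / t_half)

# Sg-269, half-life ~2.1 minutes:
print(fraction_remaining(2.1, 2.1))   # one half-life -> 0.5
print(fraction_remaining(21.0, 2.1))  # ten half-lives -> under 0.1% remains
```

This is why only atom-at-a-time chemistry is possible with seaborgium: within half an hour of synthesis, essentially none of a Sg-269 sample is left.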
Uses of Seaborgium: At this time, the only use of seaborgium is in research, primarily toward the synthesis of heavier elements and learning about its chemical and physical properties. It is of particular interest to fusion research.
Toxicity: Seaborgium has no known biological function. The element presents a health hazard because of its inherent radioactivity. Some compounds of seaborgium may be chemically toxic, depending on the element's oxidation state.
References
A. Ghiorso, J. M. Nitschke, J. R. Alonso, C. T. Alonso, M. Nurmia, G. T. Seaborg, E. K. Hulet and R. W. Lougheed, Physical Review Letters 33, 1490 (1974).
Fricke, Burkhard (1975). "Superheavy elements: a prediction of their chemical and physical properties." Recent Impact of Physics on Inorganic Chemistry 21: 89–144.
Hoffman, Darleane C.; Lee, Diana M.; Pershina, Valeria (2006). "Transactinides and the future elements." In Morss; Edelstein, Norman M.; Fuger, Jean (eds.), The Chemistry of the Actinide and Transactinide Elements (3rd ed.). Dordrecht, The Netherlands: Springer Science+Business Media.

Saturday, October 19, 2019

(Urgent) Law exam questions Essay Example | Topics and Well Written Essays - 750 words

The report recorded by the witness, who was also one of the company's employees, indicated that the icicle had been taken off the claimant's left leg. The judge ruled that since the Santa and the elf did not see the icicle as they discharged their duties in the usual way, and since there was a protection scheme in operation, the respondent was not in violation of its duty. The judge added that the icicle was invisible to the employees responsible (the elf and Santa) because it was covered by a toy on one side and the wall on the other. Had they seen it earlier, they could have taken it away and the claimant would not have suffered the injury. The judge ruled that the respondent was not responsible for the damage suffered by the visitor because the security system in place could have protected the claimant from falling. In this case, the legal issue involved the duty of care the employees of the company owe to visitors. It is the company's mandate to ensure all measures are in place to protect visitors against any injury or loss during the time they are on the premises. Here, the concern was whether the injury Dufosse suffered when she fell after stumbling against the icicle was a result of the employees' negligence. Following the application for an appeal, the appellant expressed dissatisfaction with the earlier ruling. The respondent, on the other hand, argued that the appellant had contributed to the injury by falling on the icicle. The judge reasoned that if the icicle had been there to be fallen on, then the employees too could have seen it. Therefore, the claim that the appellant had contributed to the injury she suffered was not in order, and there was no contributory negligence in the case. By ratio decidendi, the issue is to assess the basis on which the judge of appeal arrived at the ruling of the case at hand. As stated earlier, the judge in the district court

Friday, October 18, 2019

Journey from LPN to RN Essay Example | Topics and Well Written Essays - 750 words - 1

Journey from LPN to RN - Essay Example Expectations, educational/professional outlooks, and personal encounters have helped to transform the experience of this author from an LPN to an RN. Ultimately, for me there were two paths that could have been taken in transitioning from an LPN to an RN. These paths are as follows: upon completing a residency as an LPN for a period of approximately one year, I could have applied for a Bachelor of Science in Nursing (BSN) degree, which would ultimately translate into an RN. An alternative means to achieve the same goal is, after completing university, to apply for and complete a Master of Science in Nursing degree, which serves as something of an accelerated LPN-to-RN program without any requirement for prior work experience. Although the second option is perhaps the more strenuous, it cannot be said to be more difficult, because it does not have any residency or prior work experience requirement attached to it as the first alternative does. With regard to my personal experience, the path from LPN to RN has followed the first of these routes. Even though hindsight is perfect, if it were possible to make the choice over again, it would necessarily be the same as it was the first time. This is because this particular path provided me with a high degree of hands-on experience and the direct application of knowledge in the field, which pursuing the Master's program directly from the LPN would not have provided. Likewise, upon entering the program, I had a strong personal desire to further my education because my husband was suffering from a very serious condition; this further encouraged me to do all that I could and pour myself completely into studying the requirements that were placed in front of me.
Another primary reason I chose this path was that the RN's scope and job responsibilities allowed for a far greater

Cubans in Miami Research Paper Example | Topics and Well Written Essays - 1250 words

Cubans in Miami - Research Paper Example In addition to this, the Cuban community is characterized by low fertility levels due to its demographic structure. The reasons for their high social and economic status are, first and foremost, that women participate in income-generating activities more than the men. In addition, the Cuban community was characterized by the presence of a strong, closed ethnic society. Finally, Cuban society was involved in post-revolutionary activities which helped them fight for better living standards. The Cuban people are known to have a strong cultural system. However, owing to the different way of life in the United States of America, they have adjusted their values and beliefs and have assimilated into American society. Several studies have suggested that about 1 million of the American population are of Cuban origin. More accurate data from the United States Bureau of the Census, collected around 1980, revealed that about 803,226 of the American population were of Cuban descent, and this number is reported to have increased over the years (Lisandro 129). The immigration of Cubans to America has always been linked to economic situations and political events on the island. Before the American government helped end Spanish rule on the island in 1899, the northern Cuban neighbor had played a considerable role in Cuba's economic and political issues. As the involvement of the US government intensified during the 19th and 20th centuries, the United States of America became a preferred place of settlement for Cuban emigrants, who have succeeded in attaining powerful positions in the financial, intellectual and political landscapes of the United States (L. Glenn 31). As statistics depict, the number of Cuban immigrants before 1885 was relatively low. However, about five years later, the number of Cuban immigrants to the United States of America had more than tripled.
New heights of immigration of Cubans were reached between 1897 and 1910 which is a

Summary of an article Essay Example | Topics and Well Written Essays - 250 words - 9

Summary of an article - Essay Example She then cited the issue of education. The trend identified was that more women are graduating from college than men and that their career trajectory runs parallel with the growth of the knowledge economy. What this means, for Luscombe, is that women (who claim a big part in keeping the partnership strong) are no longer dependent on marriage because of their financial independence. She pointed out that two-thirds of divorces were initiated by wives. Finally, Luscombe concluded that marriage as the ultimate "merit badge" of a successful personal life is no longer true. She argued that more and more people have found that the things that can make them happy, such as a sex life, companionship and children, can all be achieved outside of wedlock. All in all, Luscombe was quite persuasive in her arguments. She cited solid evidence to back her points. However, she failed to comprehensively address the marriage issue. She recognized that marriage is an institution and, certainly, it takes more than money or economics to erode how people perceive it. While it is valid to say marriage is losing its appeal, it is important to cover all dimensions in explaining such

Thursday, October 17, 2019

MANAGEMENT RESEARCH PROJECT Paper Example | Topics and Well Written Essays - 3250 words

MANAGEMENT PROJECT - Research Paper Example The appraisal is conducted as part of the performance management process of the organization because how it is handled determines whether the organization is able to achieve its goals or not. It can further be said that a performance appraisal is an assessment and discussion of how an employee has performed in his or her work, an assessment based purely on performance and not on the characteristics displayed by the individual employee. This process helps in measuring the skills that have been displayed and the things an employee has accomplished with as much accuracy and uniformity as possible. The understanding developed by the employee's supervisor enables management to determine the abilities of individual employees, ensuring that they are placed in positions within Cathay Pacific which will further its growth and the achievement of its goals. Furthermore, it is designed to help the company determine the areas where performance needs to be enhanced, as well as to ensure that employees are provided with the opportunities necessary for the promotion of their professional growth. This process is done in methodical ways that give supervisors the opportunity to measure the payments made to their employees against the aims and objectives of Cathay Pacific. In addition, performance appraisal gives supervisors the opportunity to analyze the factors that determine how employees perform over a certain period. Such a system helps the management of Cathay Pacific to be in a position to provide guidance to its employees towards a path that will lead to their performing better in their jobs. In addition, while performance appraisal can be considered an immensely important tool for supervisors to gain an understanding of the people who work under them, it is not necessarily the

Foreign language courses in public school Essay Example | Topics and Well Written Essays - 750 words

Foreign language courses in public school - Essay Example According to Dillon (2010), it is distressing news that many schools nationwide have stopped teaching foreign languages, overlooking the fact that a greater number of linguists are needed in America in order to look after global business and diplomacy. The argument for requiring American students to take foreign language courses at school is weighty and carries positive merits, because research shows that younger children are more able than older people to develop familiarity with foreign languages and learn to speak them fluently. Young school students are at an age when acquiring knowledge about new and difficult things does not create many hurdles, and they are able to go all the way through to acquire command of foreign languages, which are sure to assist them greatly in their later lives when they have to survive in a culturally diverse society and interact with people speaking different languages. There are many jobs which essentially demand that candidates be bilingual. Jobs in the fields of teaching and business require an individual to be able to socially interact with many people from different backgrounds who may speak different languages. Moreover, according to Peckham (n.d.), "children in foreign language programs have tended to demonstrate greater cognitive development, creativity, and divergent thinking than monolingual children."
Early foreign language learning is also important because children are the future of a country, and arming them with the ability to speak foreign languages can help them accept different cultural beliefs. Even though opponents believe that being bilingual is important and beneficial in the 21st century, since globalization is a prominent feature of present-day America, they still obstinately stand by the viewpoint that making foreign language courses mandatory at school level is not a wise step and should be reconsidered in many educational setups. Opponents suggest that though learning foreign languages has its merits, the importance of an individual's freedom should never be forgotten, and in the end it should be the student him/herself, and no other authority, who decides whether taking foreign language classes at school is important. Opponents also claim that requiring American students to take foreign language courses at school is not a wise step because students already face a complex and tough academic course load that may leave them with virtually no spare time for learning foreign languages. This claim may be true to some extent, but it can be addressed by adjusting the curriculum in such a smart way that students would not face trouble dividing their time between their other courses and additional foreign language courses.

Wednesday, October 16, 2019


Tuesday, October 15, 2019

Apparel Industry Essay Example for Free

Apparel Industry Essay Step-by-step process of manufacturing garments Design/sketch: In the first step of manufacturing, designs of the clothes and their details are sketched. Pattern design: The pattern drafting method is used to design a pattern, and the purpose of making this pattern is to create the sample garment. Sample making: The pattern is then sent to the sewing department, which assembles it into a garment; this is usually stitched in calico or muslin, an inferior quality of fabric that reduces cost. Production pattern: This is used for large-scale garment production. The patterns can be made by CAD/CAM methods, which are considered the easiest way of designing a pattern. Pattern grading: Grading is the process used to size a pattern; it is used for moving and adjusting the pattern for multiple sizes. Spreading and cutting: After grading and relaxing the fabric, it is cut into equal pieces and spread manually or by a controlled system. Lastly the fabric is cut into the shape of the garment forms. Embroidery or screen printing: Embroidery and printing of designs take place only if requested by the customer. Embroidery is done using computerized equipment; each production line may include 10 to 20 embroidery stations. Sewing: A number of workers are involved in the sewing process; they transform the pieces of fabric into designer garments. Garments are sewn in an assembly line and are completed as they progress down it.
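The pattern grading step above can be sketched in code. The following Python snippet is only an illustration, not industry practice: the point names and grade-rule increments are hypothetical values chosen for the example. Each cardinal point of a base-size pattern is shifted by a per-point (dx, dy) increment for every size step.

```python
def grade_pattern(base_points, grade_rules, steps):
    """Shift each named pattern point by its grade rule, `steps` sizes up (negative = down)."""
    return {
        name: (x + grade_rules[name][0] * steps,
               y + grade_rules[name][1] * steps)
        for name, (x, y) in base_points.items()
    }

# Hypothetical cardinal points of a pattern piece, in cm
base = {"shoulder": (0.0, 40.0), "underarm": (25.0, 30.0), "hem": (27.0, 0.0)}
# Hypothetical grade-rule increments (dx, dy) per size step
rules = {"shoulder": (0.3, 0.5), "underarm": (0.6, 0.3), "hem": (0.6, 0.0)}

two_sizes_up = grade_pattern(base, rules, steps=2)
```

The same rule table, applied with a different `steps` value, yields every size in the range from the single base pattern, which is why grading precedes spreading and cutting.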

Monday, October 14, 2019

Design And Modeling Of Axial Micro Gas Turbine Engineering Essay

Design And Modeling Of Axial Micro Gas Turbine Engineering Essay ABSTRACT Micro turbines are becoming widely used for combined power generation and heat applications. Their size varies from small-scale units like model craft to heavy supply, such as power for hundreds of households. Micro turbines have many advantages over piston generators, such as low emissions, fewer moving parts, and acceptance of commercial fuels. The gas turbine cycle and the operation of the micro turbine were studied and reported. The different parts of the turbine were designed with the help of CATIA (Computer Aided Three Dimensional Interactive Analysis) software. The turbine is of the axial-input, axial-output type. Key words: gas turbine, CATIA, rapid prototype, parts of turbine, nozzle, rotor Chapter 1 LITERATURE REVIEW Development of the micro turbine: The idea that a turbine could be used as a refrigerating machine was first introduced by Lord Rayleigh. In a letter to Nature in June 1898, he suggested the use of a turbine instead of a piston expander for air liquefaction because of the practical difficulties encountered in low-temperature reciprocating machines. He emphasized the most important function of a cryogenic expander, which is the production of cold rather than the power produced. In 1898 the British engineer Edgar C Thrupp patented a simple liquefying system using an expansion turbine. Thrupp's expander was a double-flow machine, with gas entering at the center and dividing into two oppositely flowing streams. A refrigerative expansion turbine with a tangential inward flow pattern was patented by the Americans Charles F and Orrin J Crommett in 1914. Gas was to be admitted to the turbine wheel by a pair of nozzles, but it was specified that any desired number of nozzles could be used. The turbine blades were curved to present slightly concave faces to the jet from the nozzle. These blades were comparatively short, not extending very close to the rotor hub.
In 1922, the American engineer and teacher Harvey N Davis patented an expansion turbine of unusual thermodynamic concept. This turbine was intended to have several nozzle blocks, each receiving a stream of gas from a different temperature level of the high-pressure side of the main heat exchanger of a liquefaction apparatus. The first successful commercial turbine was developed in Germany and used an axial-flow, single-stage impulse machine. Later, in 1936, it was replaced by an inward radial-flow turbine based on a patent by an Italian inventor, Guido Zerkowitz. Work on the small gas-bearing turbo expander commenced in the early fifties with Sixsmith at Reading University, on a machine for a small air liquefaction plant. In 1958, the United Kingdom Atomic Energy Authority developed a radial inward-flow turbine for a nitrogen production plant. During 1958 to 1961 the Stratos Division of Fairchild Aircraft Co. built blower-loaded turbo expanders, mostly for air separation service. Voth et al. developed a high-speed turbine expander as part of a cold moderator refrigerator for the Argonne National Laboratory (ANL). The first commercial turbine using helium was operated in 1964 in a refrigerator that produced 73 W at 3 K for the Rutherford helium bubble chamber. A high-speed turbo alternator was developed by General Electric Company, New York in 1968, which ran on a practical gas bearing system capable of operating at cryogenic temperature with low loss. "Design of turboexpander for cryogenic applications" by Subrata Kr. Ghosh, N. Seshaiah, R. K. Sahoo and S. K. Sarangi focuses on the design and development of the turbo expander. The paper briefly discusses the design methodology and the fabrication drawings for the whole system, which includes the turbine wheel, nozzle, diffuser, shaft, brake compressor, two types of bearing, and appropriate housing.
With this method, it is possible to design a turbo expander for any other fluid, since the fluid properties are properly taken care of in the relevant equations of the design procedure. Yang et al. developed a two-stage miniature expansion turbine for a 1.5 L/hr helium liquefier at the Cryogenic Engineering Laboratory of the Chinese Academy of Sciences. The turbines rotated at more than 500,000 rpm. The design of a small, high-speed turbo expander was taken up by the National Bureau of Standards (NBS), USA. The first expander operated at 600,000 rpm in externally pressurized gas bearings. The turbo expander developed by Kate et al. had a variable flow capacity mechanism (an adjustable turbine), which was capable of controlling the refrigerating power by varying the nozzle vane height. India has been lagging behind the rest of the world in this field of research and development. Still, significant progress has been made during the past two decades. At CMERI Durgapur, Jadeja developed an inward-flow radial turbine supported on gas bearings for cryogenic plants. The device gave stable rotation at about 40,000 rpm. The programme was, however, discontinued before any significant progress could be achieved. Another programme, at IIT Kharagpur, developed a turbo expander unit using aerostatic thrust and journal bearings which had a working speed of up to 80,000 rpm. Recently the Cryogenic Technology Division, BARC developed a helium refrigerator capable of producing 1 kW at a temperature of 20 K. Solid Modeling using CAD software CAD software, also referred to as Computer Aided Design software and, in the past, as computer aided drafting software, refers to software programs that assist engineers and designers in a wide variety of industries in designing and manufacturing physical products. It started with the mathematician Euclid of Alexandria, who, in his 350 B.C.
treatise on mathematics, The Elements, expounded many of the postulates and axioms that are the foundations of the Euclidean geometry upon which today's CAD software systems are built. More than 2,300 years after Euclid, the first true CAD software, a very innovative system (although of course primitive compared to today's CAD software) called Sketchpad, was developed by Ivan Sutherland as part of his PhD thesis at MIT in the early 1960s. First-generation CAD software systems were typically 2D drafting applications developed by a manufacturer's internal IT group (often collaborating with university researchers) and primarily intended to automate repetitive drafting chores. Dr. Hanratty co-designed one such CAD system, named DAC (Design Automated by Computer), at General Motors Research Laboratories in the mid 1960s. In 1965, Charles Lang's team, including Donald Welbourn and A.R. Forrest, at Cambridge University's Computing Laboratory began serious research into 3D modeling CAD software. The commercial benefits of Cambridge University's 3D CAD software research did not begin to appear until the 1970s; however, elsewhere in mid-1960s Europe, French researchers were doing pioneering work into complex 3D curve and surface geometry computation. Citroën's de Casteljau made fundamental strides in computing complex 3D curve geometry, and Bézier (at Renault) published his breakthrough research, incorporating some of de Casteljau's algorithms, in the late 1960s. The work of both de Casteljau and Bézier continues to be one of the foundations of 3D CAD software to the present time. Both MIT (S.A. Coons in 1967) and Cambridge University (A.R. Forrest, one of Charles Lang's team, in 1968) were also very active in furthering research into the implementation of complex 3D curve and surface modeling in CAD software. CAD software started its migration out of research and into commercial use in the 1970s.
Just as in the late 1960s, most CAD software continued to be developed by internal groups at large automotive and aerospace manufacturers, often working in conjunction with university research groups. Throughout the decade, automotive manufacturers such as Ford (PDGS), General Motors (CADANCE), Mercedes-Benz (SYRCO), Nissan (CAD-I, released in 1977) and Toyota (TINCA, released in 1973 by Hiromi Araki's team, and CADETT in 1979, also by Hiromi Araki), and aerospace manufacturers such as Lockheed (CADAM), McDonnell-Douglas (CADD) and Northrop (NCAD, which is still in limited use today), all had large internal CAD software development groups working on proprietary programs. In 1975 the French aerospace company Avions Marcel Dassault purchased a source-code license of CADAM from Lockheed, and in 1977 began developing a 3D CAD software program named CATIA (Computer Aided Three Dimensional Interactive Application), which survives to this day as the most commercially successful CAD software program in current use. Since then much research work has been done in the field of 3D modeling using CAD software, and many software packages have been developed. From time to time these packages have been modified to make them more user friendly. Different 3D modeling packages used nowadays include AUTODESK INVENTOR, CATIA, PRO-E, etc. History of rapid prototyping Rapid prototyping is a revolutionary and powerful technology with a wide range of applications. The process of prototyping involves the quick building of a prototype or working model for the purpose of testing various design features, ideas, concepts, functionality, output and performance. The user is able to give immediate feedback regarding the prototype and its performance. Rapid prototyping is an essential part of the process of system design, and it is believed to be quite beneficial as far as reduction of project cost and risk are concerned.
The first rapid prototyping techniques became accessible in the late eighties, and they were used for the production of prototype and model parts. The history of rapid prototyping can be traced to the late sixties, when an engineering professor, Herbert Voelcker, asked himself about the possibilities of doing interesting things with the computer-controlled and automatic machine tools that had just started to appear on factory floors. Voelcker was trying to find a way in which automated machine tools could be programmed using the output of a computer design program. In the seventies Voelcker developed the basic mathematical tools that clearly described three-dimensional aspects, resulting in the earliest algorithmic and mathematical theories for solid modeling. These theories form the basis of the modern computer programs used for designing almost all things mechanical, from the smallest toy car to the tallest skyscraper. Voelcker's theories changed the design methods of the seventies, but the old methods for designing were still very much in use. The old method involved either a machinist or a machine tool controlled by a computer; the metal hunk was cut away until the needed part remained, as per requirements. However, in 1987, Carl Deckard, a researcher from the University of Texas, came up with a revolutionary idea. He pioneered layer-based manufacturing, in which he thought of building up a model layer by layer. He printed 3D models by using laser light to fuse metal powder into solid prototypes, a single layer at a time. Deckard developed this idea into a technique called Selective Laser Sintering. The results of this technique were extremely promising. The history of rapid prototyping is quite new and recent. However, since this technique has such wide-ranging scope and applications with amazing results, it has grown by leaps and bounds.
Voelcker's and Deckard's stunning findings, innovations and research have given extreme impetus to this significant new industry known as rapid prototyping, or free-form fabrication. It has revolutionized the design and manufacturing processes. Though there are many references to people pioneering rapid prototyping technology, the industry gives recognition to Charles Hull for the patent of the Apparatus for Production of 3D Objects by Stereolithography. Charles Hull is recognized by the industry as the father of rapid prototyping. Today, the computer engineer has simply to sketch ideas on the computer screen with the help of a computer-aided design program. Computer-aided design allows modifications to be made as required, and a physical prototype can then be created as a precise and proper 3D object. Chapter 2 CATIA (Computer Aided Three Dimensional Interactive Application) Introduction to CATIA CATIA is a robust application that enables you to create rich and complex designs. The goals of the CATIA course are to teach you how to build parts and assemblies in CATIA, and how to make simple drawings of those parts and assemblies. This course focuses on the fundamental skills and concepts that enable you to create a solid foundation for your designs. What is CATIA? CATIA is mechanical design software. It is a feature-based, parametric solid modeling design tool that takes advantage of the easy-to-learn Windows graphical user interface. You can create fully associative 3D solid models, with or without constraints, while utilizing automatic or user-defined relations to capture design intent. To further clarify this definition, the italicized terms above will be defined: Feature-based: Just as an assembly is made up of a number of individual parts, a CATIA document is made up of individual elements. These elements are called features. When creating a document, you can add features such as pads, pockets, holes, ribs, fillets, chamfers, and drafts.
As the features are created, they are applied directly to the work piece. Features can be classified as sketch-based or dress-up. Sketch-based features are based on a 2D sketch; generally, the sketch is transformed into a 3D solid by extruding, rotating, sweeping, or lofting. Dress-up features are features that are created directly on the solid model; fillets and chamfers are examples of this type of feature. Parametric: The dimensions and relations used to create a feature are stored in the model. This enables you to capture design intent, and to easily make changes to the model through these parameters. Driving dimensions are the dimensions used when creating a feature. They include the dimensions associated with the sketch geometry, as well as those associated with the feature itself. Consider, for example, a cylindrical pad: the diameter of the pad is controlled by the diameter of the sketched circle, and the height of the pad is controlled by the depth to which the circle is extruded. Relations include information such as parallelism, tangency, and concentricity. This type of information is typically communicated on drawings using feature control symbols. By capturing this information in the sketch, CATIA enables you to fully capture your design intent up front. Solid Modeling: A solid model is the most complete type of geometric model used in CAD systems. It contains all the wireframe and surface geometry necessary to fully describe the edges and faces of the model. In addition to geometric information, solid models also convey their topology, which relates the geometry together. For example, topology might include identifying which faces (surfaces) meet at which edges (curves). This intelligence makes adding features easier. For example, if a model requires a fillet, you simply select an edge and specify a radius to create it.
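The idea of driving dimensions can be illustrated with a short sketch. This is not CATIA code or its API; it is a hypothetical Python model of the cylindrical pad example above, in which the sketched circle's diameter and the extrusion depth drive the derived geometry:

```python
import math

class CylindricalPad:
    """A toy parametric feature: derived geometry follows the driving dimensions."""
    def __init__(self, diameter, depth):
        self.diameter = diameter  # driving dimension from the 2D sketch circle
        self.depth = depth        # driving dimension from the pad (extrusion depth)

    @property
    def volume(self):
        # Derived quantity, recomputed from the current driving dimensions
        return math.pi * (self.diameter / 2.0) ** 2 * self.depth

pad = CylindricalPad(diameter=40.0, depth=10.0)
before = pad.volume
pad.diameter = 50.0   # edit a driving dimension...
after = pad.volume    # ...and every derived quantity follows
```

Editing `diameter` plays the role of changing a driving dimension in the sketch; everything derived from it updates automatically, which is the essence of the parametric behavior described above.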
Fully Associative: A CATIA model is fully associative with the drawings and parts or assemblies that reference it. Changes to the model are automatically reflected in the associated drawings, parts, and/or assemblies. Likewise, changes in the context of the drawing or assembly are reflected back in the model. Constraints: Geometric constraints (such as parallel, perpendicular, horizontal, vertical, concentric, and coincident) establish relationships between features in your model by fixing their positions with respect to one another. In addition, equations can be used to establish mathematical relationships between parameters. By using constraints and equations, you can guarantee that design concepts such as through holes and equal radii are captured and maintained. CATIA User Interface: Below is the layout of the elements of the standard CATIA application: A. Menu Commands; B. Specification Tree; C. Window of Active document; D. Filename and extension of current document; E. Icons to maximize/minimize and close window; F. Icon of the active workbench; G. Toolbars specific to the active workbench; H. Standard toolbar; I. Compass; J. Geometry area. The parts of the major assembly are treated as individual geometric models, each modeled in a separate file. All the parts are planned in advance and generated feature by feature to construct the full model. Generally, all CAD models are generated in the same fashion, given below. Enter the CAD environment by clicking, then enter part design mode to construct the model; select a plane as the basic reference; enter sketcher mode. In sketcher mode there are tools to create the basic 2D structure of the part using lines, circles, etc.; tools for editing the created geometry (termed operations); tools for dimensioning and referencing, which help create parametric relations; a viewing feature to zoom the geometry in and out; and a tool to exit sketcher mode after creating the geometry.
Sketch-Based Features:
Pad: On exit of sketcher mode, the profile is padded (adding material).
Pocket: After creation of the basic structure, a pocket can be created (removing material).
Revolve: The material is revolved around an axis; the structure should have the same profile around the axis.
Rib: Sweeping a uniform profile along a trajectory (adding material).
Slot: Sweeping a uniform profile along a trajectory (removing material).
Loft: Sweeping a non-uniform or uniform profile on different planes along a linear or non-linear trajectory.
: Dress-up tools create 3D features such as chamfers, radii, drafts, shells, threads, etc.
: Transformation tools are used to move, mirror, pattern, and scale geometry in the 3D environment.
After creation of the individual parts in separate files, the parts are recalled and constrained in the assembly environment.
Product structure tool: To recall existing components already modeled.
: Assembling the respective parts by means of constraints.
Update: Updating the applied constraints.
Additional features are: exploded view, snapshots, clash analysis, numbering, bill of material, etc. Finally, a draft is created for the individual parts and the assembly with all necessary details.
Chapter 3 GAS TURBINE
Gas Turbine
A gas turbine is a rotating engine that extracts energy from a flow of combustion gases produced by the ignition of compressed air and a fuel (either a gas or a liquid, most commonly natural gas). It has an upstream compressor module coupled to a downstream turbine module, with a combustion chamber module (with igniters) in between. Energy is added to the gas stream in the combustor, where air is mixed with fuel and ignited. Combustion increases the temperature, velocity, and volume of the gas flow. This flow is directed through a nozzle over the turbine's blades, spinning the turbine and powering the compressor. Energy is extracted in the form of shaft power, compressed air, and thrust, in any combination, and used to power aircraft, trains, ships, generators, and even tanks.
Chronology of Gas Turbine Development:
Types of Gas Turbine
There are different types of gas turbines. Some of them are named below:
1. Aero-derivatives and jet engines
2. Amateur gas turbines
3. Industrial gas turbines for electrical generation
4. Radial gas turbines
5. Scale jet engines
6. Micro turbines
The main focus of this paper is the design aspects of the micro turbine.
Applications of Gas Turbine:
Jet engines
Mechanical drives
Power generation
Automobiles, trains, tanks
Vehicles (concept cars, racing cars, buses, motorcycles)
Gas Turbine Cycle
The simplest gas turbine follows the Brayton cycle. In the closed cycle (i.e., the working fluid is not released to the atmosphere), air is compressed isentropically, combustion occurs at constant pressure, and expansion over the turbine occurs isentropically back to the starting pressure. As with all heat-engine cycles, a higher combustion temperature (the common industry reference is turbine inlet temperature) means greater efficiency. The limiting factor is the ability of the steel, ceramic, or other materials that make up the engine to withstand heat and pressure; considerable design and manufacturing engineering goes into keeping the turbine parts cool. Most turbines also try to recover exhaust heat, which is otherwise wasted energy. Recuperators are heat exchangers that pass exhaust heat to the compressed air prior to combustion. Combined-cycle designs pass waste heat to steam turbine systems, and combined heat and power (i.e., cogeneration) uses waste heat for hot-water production. Mechanically, gas turbines can be considerably less complex than internal combustion piston engines. Simple turbines might have one moving part: the shaft/compressor/turbine/alternator-rotor assembly, not counting the fuel system. More sophisticated turbines may have multiple shafts (spools), hundreds of turbine blades, movable stator blades, and a vast system of complex piping, combustors, and heat exchangers. The largest gas turbines operate at 3000 RPM (50 hertz [Hz], European and Asian power supply) or 3600 RPM (60 Hz, U.S. power supply) to match the AC power grid. They require their own building, and several more to house support and auxiliary equipment such as cooling towers. Smaller turbines, with fewer compressor/turbine stages, spin faster: jet engines operate around 10,000 RPM and micro turbines around 100,000 RPM.
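The link stated above between turbine inlet temperature, pressure ratio, and efficiency can be made concrete for the ideal cycle. The sketch below uses standard air-standard Brayton-cycle assumptions (cold perfect gas, gamma = 1.4); real-engine efficiencies are lower because of component losses and cooling.

```python
# Ideal (air-standard) Brayton cycle thermal efficiency:
#   eta = 1 - r^(-(gamma - 1)/gamma),  r = compressor pressure ratio.
GAMMA = 1.4  # ratio of specific heats for air (assumed)

def brayton_efficiency(pressure_ratio: float) -> float:
    return 1.0 - pressure_ratio ** (-(GAMMA - 1.0) / GAMMA)

for r in (4, 10, 30):
    print(r, round(brayton_efficiency(r), 3))
# prints:
# 4 0.327
# 10 0.482
# 30 0.622
```

Note the ideal efficiency depends only on pressure ratio; the turbine inlet temperature enters through the specific work and through how closely real components can approach the ideal.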
Thrust bearings and journal bearings are a critical part of the design. Traditionally, they have been hydrodynamic oil bearings or oil-cooled ball bearings.
Advantages of Gas Turbine:
1. Very high power-to-weight ratio compared to reciprocating engines.
2. Smaller than most reciprocating engines of the same power rating.
3. Moves in one direction only, with far less vibration than a reciprocating engine.
4. Fewer moving parts than reciprocating engines.
5. Low operating pressures.
6. High operating speeds.
7. Low lubricating-oil cost and consumption.
Chapter 4 MICRO TURBINE
Micro Turbine
Micro turbines are small combustion turbines with outputs ranging from 20 kW to 500 kW. They evolved from automotive and truck turbochargers, auxiliary power units (APUs) for airplanes, and small jet engines. Micro turbines are a relatively new distributed-generation technology used for stationary energy generation applications. Normally they are combustion turbines that produce both heat and electricity on a relatively small scale. A micro (gas) turbine engine consists of a radial inflow turbine, a combustor, and a centrifugal compressor; the turbine outputs shaft power as well as driving the compressor. Micro turbines are becoming widespread for distributed power and cogeneration (combined heat and power) applications, and they are one of the most promising technologies for powering hybrid electric vehicles. They range from hand-held units producing less than a kilowatt to commercial-sized systems that produce tens or hundreds of kilowatts. Part of their success is due to advances in electronics, which allow unattended operation and interfacing with the commercial power grid. Electronic power-switching technology eliminates the need for the generator to be synchronized with the power grid. This allows the generator to be integrated with the turbine shaft and to double as the starter motor.
They accept most commercial fuels, such as gasoline, natural gas, propane, diesel, and kerosene, as well as renewable fuels such as E85, biodiesel, and biogas.
Types of Micro Turbine
Micro turbines are classified by the physical arrangement of the component parts:
1. Single-shaft or two-shaft
2. Simple cycle or recuperated
3. Inter-cooled and reheat
The machines generally rotate at over 50,000 RPM. The bearing selection (oil or air) depends on usage. A single-shaft micro turbine with high rotating speeds of 90,000 to 120,000 revolutions per minute is the more common design, as it is simpler and less expensive to build. Conversely, the split shaft is necessary for machine-drive applications, which do not require an inverter to change the frequency of the AC power.
Basic Parts of Micro Turbine:
1. Compressor
2. Turbine
3. Recuperator
4. Combustor
5. Controller
6. Generator
7. Bearing
Advantages
Micro turbine systems have many advantages over reciprocating engine generators, such as higher power density (with respect to footprint and weight), extremely low emissions, and few, or just one, moving parts. Those designed with foil bearings and air cooling operate without oil, coolants, or other hazardous materials. Micro turbines also have the advantage that the majority of their waste heat is contained in their relatively high-temperature exhaust, whereas the waste heat of reciprocating engines is split between the exhaust and the cooling system. However, reciprocating engine generators are quicker to respond to changes in output power requirements and are usually slightly more efficient, although the efficiency of micro turbines is increasing. Micro turbines also lose more efficiency at low power levels than reciprocating engines.
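To give a mechanical sense of the 90,000-120,000 RPM shaft speeds cited above, the rotor tip speed can be estimated from the shaft speed and wheel size. The 60 mm wheel diameter below is an assumed illustrative value, not a figure from the text.

```python
import math

def tip_speed(rpm: float, diameter_m: float) -> float:
    """Rotor tip speed in m/s for a given shaft speed and wheel diameter."""
    omega = rpm * 2.0 * math.pi / 60.0   # angular velocity, rad/s
    return omega * diameter_m / 2.0      # v = omega * radius

# Hypothetical 60 mm wheel at a representative micro-turbine speed:
print(round(tip_speed(100_000, 0.060), 1))  # prints 314.2 (m/s)
```

Even a small wheel at these speeds runs at tip speeds of several hundred metres per second, which is why bearing selection and rotor design are treated as critical in the text.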
Micro turbines offer several potential advantages compared to other technologies for small-scale power generation, including: a small number of moving parts, compact size, light weight, greater efficiency, lower emissions, lower electricity costs, and opportunities to utilize waste fuels. Waste-heat recovery can also be used with these systems to achieve overall efficiencies greater than 80%. Because of their small size, relatively low capital costs, expected low operations and maintenance costs, and automatic electronic control, micro turbines are expected to capture a significant share of the distributed-generation market. In addition, micro turbines offer an efficient and clean solution for direct mechanical-drive markets such as compression and air conditioning.
Thermodynamic Heat Cycle
In principle, micro turbines and larger gas turbines operate on the same thermodynamic heat cycle, the Brayton cycle. Atmospheric air is compressed, heated at constant pressure, and then expanded; the excess of the power produced by the turbine over that consumed by the compressor is used to generate electricity. The power produced by an expansion turbine and consumed by a compressor is proportional to the absolute temperature of the gas passing through those devices. Higher expander inlet temperatures and pressure ratios result in higher efficiency and specific power. Higher pressure ratios increase efficiency and specific power until an optimum pressure ratio is reached, beyond which both decrease. The optimum pressure ratio is considerably lower when a recuperator is used. Consequently, for good power and efficiency, it is advantageous to operate the expansion turbine at the highest practical inlet temperature consistent with economic turbine-blade materials and to operate the compressor with inlet air at the lowest temperature possible. The general trend in gas turbine advancement has been toward a combination of higher temperatures and pressures.
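The claim above that specific power peaks at an optimum pressure ratio can be checked numerically for the ideal cycle. The analytic optimum for net specific work is r_opt = (T3/T1)^(gamma/(2(gamma-1))), a standard air-standard-cycle result. The temperatures below are illustrative assumptions: 300 K ambient and about 1228 K (roughly 1750 degrees F) turbine inlet.

```python
# Ideal-cycle net specific work versus pressure ratio r:
#   w = cp * [T3 * (1 - r^-K) - T1 * (r^K - 1)],  K = (gamma-1)/gamma
GAMMA = 1.4
K = (GAMMA - 1.0) / GAMMA
CP = 1005.0  # J/(kg K), air (assumed)

def net_specific_work(r: float, t1: float, t3: float) -> float:
    # turbine work minus compressor work, per kg of air
    return CP * (t3 * (1.0 - r ** -K) - t1 * (r ** K - 1.0))

t1, t3 = 300.0, 1228.0  # assumed ambient and turbine inlet temperatures, K
r_opt = (t3 / t1) ** (GAMMA / (2.0 * (GAMMA - 1.0)))  # analytic optimum

# Confirm numerically by scanning pressure ratios from 1.01 to 30.99:
best = max((net_specific_work(r, t1, t3), r)
           for r in (1 + i * 0.01 for i in range(1, 3000)))
print(round(r_opt, 2))   # analytic optimum, ~11.78
print(round(best[1], 2)) # numeric optimum agrees
```

This is the unrecuperated optimum; as the text notes, adding a recuperator moves the best-efficiency pressure ratio considerably lower.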
However, inlet temperatures are generally limited to 1750 °F or below to enable the use of relatively inexpensive materials for the turbine wheel and recuperator. A pressure ratio of about 4:1 is the optimum for best efficiency in recuperated turbines.
Applications
Micro turbines are used in distributed power and combined heat and power applications. With recent advances in electronic, microprocessor-based control systems, these units can interface with the commercial power grid and can operate unattended.
Power Range for Different Applications:
Chapter 5 DIFFERENT PARTS AND THEIR DESIGNING OF MICRO TURBINE
ROTOR
The rotor is mounted vertically. It consists of the shaft, with a collar integrally machined on it to provide thrust-bearing surfaces, and the turbine wheel and the brake compressor mounted on opposite ends. The impellers are mounted at the extreme ends of the shaft, while the bearings are in the middle.
NOZZLE
The nozzles expand the inlet gas isentropically to high velocity and direct the flow onto the wheel at the correct angle to ensure smooth, impact-free incidence on the wheel blades. A set of static nozzles must be provided around the turbine wheel to generate the required inlet velocity and swirl. The flow is subsonic, the absolute Mach number being around 0.95. Filippi has derived the effect of nozzle geometry on stage efficiency through a comparative discussion of three nozzle styles: fixed nozzles, adjustable nozzles with a centre pivot, and adjustable nozzles with a trailing-edge pivot. At design-point operation, fixed nozzles yield the best overall efficiency. Nozzles should be located at the optimal radial distance from the wheel to minimize vaneless-space loss and the effect of nozzle wakes on impeller performance. Fixed nozzle shapes can be optimized by rounding the noses of the nozzle vanes, and they are directionally oriented for minimal incidence-angle loss. The throat of the nozzle has an important influence on turbine performance and must be sized to pass t