
During the commitment phase the exchange between the partners results in such an amount of benefits that they agree, implicitly or explicitly, to maintain the relationship and cement the partnership (Klepper). The authors also define five subprocesses that work within the phases and move the parties closer to or further from success in the outsourcing relationship.

The subprocesses attraction, communication and bargaining, power, norms and expectations are considered the drivers that impact the phases of exchange relationship development and show how the phases lead to a successful outsourcing agreement. Klepper defines each of the aspects and how they should be managed in each phase of the relationship exchange. In short, attraction refers to the rewards provided directly to the client by the vendor and to rewards inherent in the characteristics of the vendor.

Communication and bargaining are related to the exchange of information between partners. This information concerns more than just the necessary information exchanged in the projects, but the open revelation of needs and resources related to the future of the relationship. As in several relationship frameworks, bargaining reflects an important concept in conflict resolution. A successful exchange relationship is marked by the level of bargaining and communication between client and vendor, which allows conflicts to be solved more easily (Klepper). As the outsourcing relationship underlines, reciprocity is the first expectation of the partners.

Expectation is based on trust, and both concepts form a spiraling relationship that is critical to the development of a relationship and is a necessary foundation for an investment by both parties. Norms concern the expected patterns of behavior in a relationship.

These norms are developed over time and strengthened according to the information shared between the parties. Norms pave the way for strong commitments; through norms, partners can build solid knowledge about how to achieve the expectations of each other and construct a win-win outsourcing relationship (Klepper). Power and justice are tightly related to concepts presented in the economic and strategic outsourcing approaches.

Klepper explains power and justice as follows: one party has power over a second party if the second is dependent on the first for valued resources, and this power is enhanced if there are limited alternative sources available to the second party. Power can be exercised in diverse ways and it can be considered just or unjust.

In a partnership it is unjust if only one party enjoys the benefit of the power exertion; however, it is just if both parties have adequate rewards when the first exercises the power. For an outsourcing arrangement a careful exercise of power is necessary to sustain and deepen the relationship. The figure illustrates the relationship among the partnership development stages, the factors that deepen the relationship, and partnering relationships as an outsourcing enabler.

There will also be discussed, which companies for the interviews were taken and why. Furthermore a statement on the research credibility will ensure that the research at hand provides data that is of high validity and reliability. The overview, which goes from the research method that is used until the presen- tation of the conclusion, which will specifically analyzed in further sub and main chap- ters. A quantitative research can certainly generate statistical proof through methods like a survey research or quantitative questionnaires Dawson, Howev- er, for this specific research, numerical data will not be taken to answer the research question.

The reasons will be explained in the following. As the research at hand involves a lot of literature analysis, interpretation and inter- views to even get some experiences about that topic and investigate the answers of all interviewees, the research will take place towards a qualitative study. The research purpose is going towards an exploratory approach as insights are being searched within companies.

Two of the three principal ways after Saunders et al. Un- structured questions were asked to the interviewees to get as much valid data as possi- ble. It was also very crucial for answering the research questions not to force the inter- viewees for answering a specific problem, but rather to let them talk about their expe- rience so that it was then possible to relate these insights to the theoretical framework at hand.

As for the exploratory research at hand, every kind of research strategy can be taken, whether it is an experiment, survey, case study, action research, grounded theory, ethnography or archival research. As this research was searching for a strategy that makes it possible to explore an existing theory by including interviews, observations and documentary analyses, a case study strategy was chosen for this research.

According to Saunders et al., a distinction can be made between cross-sectional and longitudinal time horizons. A cross-sectional study involves a so-called "snapshot" taken at a specific time, while a longitudinal study is pursued over a longer period of time. With the studies at hand there is no intention to investigate the companies over a long period of time. Hence this research leads towards a cross-sectional study. To understand the current stage of IT outsourcing in different companies and then connect the insights from their personal experiences to a theoretical model, interviews were chosen as the primary data collection method.

This method enabled agile and straightforward communication with the top management of every company that was scrutinized. It was also crucial to establish contacts among companies in Sweden, as internal information was shared.

From a provider perspective a large IT outsourcing and cloud computing provider, and from the client perspective one SME and one large organization, fit the preferences listed above. In fact, for the research at hand three companies have been found to be perfectly suitable, one for each of these preferences. They will be introduced in the following.

Capgemini: Capgemini is a management and IT consultancy acting worldwide in over 30 countries with over 92,000 employees. It consists of four strategic business units. In its service portfolio Capgemini offers strategy advisory and a broad range of IT services. Companies can therefore outsource their IT in a tangible way (servers and data centers) while having external providers taking care of their assets. Their so-called Infostructure Transformation Services (ITS) enable companies to see a direct cost reduction in their business with cloud solutions (Capgemini). An overview of the interview is available in the appendix.

Talentia: Talentia has proven to be a crucial part of the research, as the company was moving from a traditional service provider to a cloud solution. Since the company has 15 employees, Talentia's IT expertise is rather limited. However, that does not mean that its IT is not important. Having had a traditional IT outsourcing solution before, Talentia recently decided to move to a Microsoft cloud solution along with moving its physical offices anyway. The benefits that Talentia will get from this kind of change will be analyzed in Chapter 5. The write-up of the interview with the vice president of Talentia, Helena Norder, is attached in the appendix.

Alstom: Alstom is acting in 70 countries with the support of currently around 80,000 employees (Alstom). Alstom currently has two data centers dedicated to its needs and is not based on any cloud architecture yet.

Lindsten still claims that a cloud solution could be a very cost-efficient solution on a local basis. The telephone interview with Pernilla Lindsten can be found in the appendix. Hence Capgemini serves as an IT outsourcing provider, while Talentia and Alstom are small- and large-sized representatives of the client perspective.

As for the selection of the interviewees, it was important, in order to answer the research questions best, to speak to someone who is really familiar with or even involved in the IT outsourcing process. However, Helena Norder could give strategic business information about the new cloud contract and valuable reasons why Talentia decided to change to a cloud solution in its company.

For the interviews, companies were contacted via e-mail and telephone by showing interest in the evaluation process of the IT outsourcing part of their business. With this information it was made clear to the companies that the research would benefit by discovering whether or not the IT outsourcing process towards cloud computing is much different from traditional IT outsourcing possibilities.

The method of relating the interviews to the theoretical model is the following: when referring to the theoretical model during the interviews, the key concepts were not literally mentioned. Thereby reliability (see Research Credibility) was assured. However, for qualitative research it is not the main objective to generalize from the sample that has been chosen. This is why this research runs a series of unstructured interviews with a rather small sample of companies.

This is not only because it becomes more manageable, but also because the limited time for the research allowed only a moderate number of companies to be picked, as cloud computing is not yet used in many companies. It is therefore certain that, had the interviews been conducted with another group of companies, the results of the research could differ.

As for the population, an approximate number cannot be provided, as there is no scientific source that proves what percentage of companies use cloud computing within their IT infrastructure. However, only three companies were found to fit the needs to answer the research question properly.

This is because of the fact that cloud computing solutions are not yet used widely in companies. Still, cloud computing solutions are becoming more and more attractive to businesses (Capgemini); hence there is huge potential for future research in which even more companies take part. As for access to secondary data, a broad range of material could be accessed. The IT outsourcing concepts were fundamental to this research and have been researched for years.

As the concept of cloud computing is not as mature as the IT outsourcing models (Vedin), there are companies that tend to sell cloud computing in a way that makes the most money out of it. In the following, both kinds of data collection sources for this research will be introduced. Primary data collection is done by getting data from personal interviews, documents, observations or other data that usually has not existed or been used before.

In this research the primary data will consist of qualitative data that is obtained through interviews. Further details about the interview techniques were explained earlier in this chapter. As for secondary data, research books, reports, journal articles and websites of reliable authors and organizations have been used to get information about cloud computing and its roots as well as about the IT outsourcing concepts.

Crucial for this research was not to take sources from companies that aim to sell cloud solutions, as these materials might be biased due to the hype that currently exists around cloud computing. For the qualitative study at hand there are two main quality aspects, which are discussed within the following sub chapters.

All those threats will be discussed in the following. As for the participant error aspect, there is the possibility that the interviewees were influenced by being part of a research process (Saunders et al.). However, during the process of interviewing we could not see any of those issues applying to the interviewees. There is also the hazard that the participants of the interview could be saying whatever they are supposed to say by their authorities, the so-called participant bias (Saunders et al.).

As the interviewees and their companies did not give the impression of being pushed by anyone, and as the questions that were asked did not involve any personal opinions, this threat to reliability does not apply to the research at hand. As the interviews were conducted in an unstructured way, the so-called observer error can also become a threat to reliability, since two interviewers have the potential of asking questions differently (Saunders et al.).

This threat was lessened by deciding that only one of the two authors of the research at hand would lead the interviews, while the other person was responsible for taking notes. The observer bias refers to the potential of the interviewers to interpret the answers differently (Saunders et al.). The answers were not of an interpretative nature, but rather based on concepts that could be associated with the theoretical framework.

However, it would be critical to lead the interviewees to the answers that are desired for the framework. Leading interviewees to answers that are expected by the interviewers would undermine the very idea of the research. In order to keep the research reliable, questions within the interviews did not literally mention the key concepts but rather conveyed their meaning when framing those concepts.

Also, the answers could be interpreted wrongly by the interviewers. To address a possible misinterpretation of the comments of the interviewees, the research has been sent to the companies, with the offer to consult us as soon as they see any misunderstandings within the findings and analysis part. All those threats will be discussed in the following. As far as the history threat to validity is concerned (Saunders et al.), cloud computing has in this research indeed been hyped by companies that offer those solutions.

Hence it could influence interviewees to fall into the hype as well. However, the respondents of Alstom and Capgemini had an IT background and could give valid answers on what is behind the concept of cloud computing. In terms of testing, there is a validity hazard of people acting differently when they know they are being tested, which is then likely to mislead the findings (Saunders et al.).

In the research at hand this specific threat would only apply to the extent that the interviewees act and talk differently than they would do otherwise. However, there was neither an impression nor a necessity for the interviewees to act or talk differently when talking to the interviewers.

The threat of instrumentation denotes the hazard of participating employees trying to get across something that they are possibly interested in, such as selling their own products (Saunders et al.). As a cloud solution provider, Capgemini could have taken the opportunity to sell its cloud products. The mortality threat refers to the fact that participants of studies could possibly leave the research process (Saunders et al.). As this research is not aimed at being a longitudinal study, this threat does not apply to the research at hand.

Maturation refers to the fact that unforeseeable external events can happen, which could lead to a misguidance of the findings (Saunders et al.). This threat could have affected the research at hand, as it is hard to influence such external changes. However, no unexpected event happened that could have wrongly influenced the findings of the research at hand. Therefore the maturation threat does not apply here.

In terms of ambiguity about causal direction, correlations between cause and effect can be interpreted wrongly (Saunders et al.). As there are no interdependencies of that nature in the research at hand, this threat does not apply. Generalizability, by its very nature, refers to the external validity of the research and therefore to whether or not the work can be applied to other research settings as well. For instance, a case study within a small company would have difficulty reasoning generalizability to bigger companies (Saunders et al.).

According to Dawson, the objective of qualitative research is not necessarily to generalize, but rather to choose a small number of people or companies, which can then give insights into a wider range of the population. However, it is certain that in this case, if another range of companies had been chosen, the results could be different. The research at hand was designed to offer as much generalizability as possible. This is why it was decided not to give only companies of a certain size a say in the interviews, but rather to offer a broad range for the qualitative analysis.

The generalizability concern existing for the research at hand is the reason that three companies took part in this thesis. Each company represented a certain company size. It is therefore certain that if another group of companies within that population were chosen, the results could be different. However, the insights taken from the three companies can give insights into the behavior of a wider research population.

4 Theoretical Framework

The challenge of constructing a concise framework consists in structuring all the important aspects that will guide the empirical research and analysis. In order to get the whole picture of IT outsourcing, the theories presented in the theoretical background (resource-based, agency cost, transaction cost, resource dependence and exchange theory) are put together.

The conceptual framework maps the main variables and concepts of each IT outsourcing theory, aiming to provide guidance and coherence to the empirical inquiry. To assure the objective of connecting all aspects involved in this study, a conceptual framework is presented in the figure below. The framework is fundamentally based on the work of Cheon et al.

However, considering the perspectives of each theory, the concepts can be interpreted in different ways.

Figure: Theoretical Framework for IT Outsourcing

Complementary to the theories and the linkages between concepts presented in the framework, the following tables are developed in order to briefly define each concept involved in the framework. The tables are structured as a sequence of short theory definitions followed by the respective variable definitions.

Each definition makes sense in the context of the theory in which the variable is cited. For example, the definition of measurability differs according to the context in which it is applied; however, the table makes clear that this definition of measurability refers to the agency cost theory. Moreover, all the definitions provided in the tables are already detailed in chapter 2. According to the methods used in this study, the framework and the tables of concepts are the main tools used to analyze the data collected through the interviews.

Indeed, this is the main link between the theoretical framework and the empirical work performed to answer the main questions addressed by this study. In fact, it has already been proven that this framework is used for traditional IT outsourcing. The point to be reached, however, is the possible usage of the same framework for the cloud computing business model.

In summary, the framework is not used in this work to prove whether an IT outsourcing solution is recommended or not; rather, in the opposite causal relationship, the objective is to take real cases and evaluate whether the framework could be applied in the decision process. Taking the research objectives into consideration, it is important to understand each concept and variable present in the framework. This makes it possible to go deep into the real cases, scrutinize the information and capture each important aspect to be considered in the IT outsourcing decision-making process.

The transaction cost theory refers to the economic efficiency that can be reached through a balance between production costs and transaction costs. Outside providers can reduce production costs via economies of scale, while transaction costs increase as a result of asset specificity, uncertainty and infrequency. The following table gives a summary of the sub-concepts of the transaction cost theory.

Table: Concept Definitions of the Transaction Cost Theory

Asset Specificity: An asset is specific if it is necessary for the production of a good or service and has much lower value in alternative uses. Specificity increases transaction costs.

Uncertainty: Uncertainty can be a result of aspects such as the actual cost of the production process itself, an unpredictable market, technological and economic trends, contractual complexity and the quality of outputs. Avoiding uncertainty increases transaction costs and agency costs.

Infrequency: The costs of relationship building, the formulation of adequate contracts and ensuring consistency of goals between the contracting parties are the main costs generated by infrequency. Infrequency increases transaction costs.
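As a purely illustrative sketch (the figures and the helper function below are assumptions for demonstration, not data or tooling from this thesis), the make-or-buy reasoning behind the transaction cost theory can be expressed as a comparison of total costs, where the provider's economies of scale lower the production cost while asset specificity, uncertainty and infrequency add transaction costs:

```python
# Hypothetical make-or-buy comparison in the spirit of transaction cost theory.
# All figures are invented for illustration; they are not data from the interviews.

def total_cost(production_cost: float, transaction_cost: float) -> float:
    """Total cost of a sourcing option = production cost + transaction cost."""
    return production_cost + transaction_cost

# Yearly cost of one IT service (arbitrary monetary units)
in_house   = total_cost(production_cost=120.0, transaction_cost=10.0)  # internal coordination only
outsourced = total_cost(production_cost=80.0,                          # provider's economies of scale
                        transaction_cost=25.0)                         # contracting, monitoring, specificity

print(f"in-house: {in_house}  outsourced: {outsourced}")
print("outsourcing favourable" if outsourced < in_house else "in-house favourable")
```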

As for the agency cost theory, the following can be said: the choice of the source of a product or service is a consequence of the type of contract required. In the case of outcome-based contracts, market governance (outsourcing) is the best option, while the choice of a behavior-based contract implies hierarchical governance, i.e. in-sourcing of services or goods. Agency costs are the sum of the monitoring expenditures by the principal, the bonding expenditures by the agent and the residual loss. Uncertainty, risk aversion, programmability, measurability and length increase agency costs.

All the concepts are introduced in the following table.

Table: Concept Definitions of the Agency Cost Theory

Uncertainty: Uncertainty can be the result of aspects such as the actual cost of the production process itself, an unpredictable market, technological and economic trends, contractual complexity and the quality of outputs. Avoiding uncertainty increases transaction costs and agency costs.

Risk Aversion: Occurs when the client passes the risk to the provider (agent). The more risk-averse the agent is, the more expensive it gets to outsource (outcome-based contract).

Programmability: The degree to which appropriate behavior by the agent can be specified in advance.

Measurability: The easier the measurement of an outcome is, the cheaper the option for an outcome-based contract (generally an outsourcing provider) becomes.

Length: In a long-term agency relationship the principal is able to learn about the behavior of the agent, while in a short-term relationship the information asymmetry is potentially higher. Consequently, a long-term agency relationship is more favorable to a behavior-based contract (generally an internal provider).
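A minimal sketch of how these variables can be combined (the cost components, numbers and decision heuristic below are assumptions made for illustration, not an instrument used in this study): agency costs are the sum of monitoring, bonding and residual loss, and the attractiveness of an outcome-based contract grows with the measurability of the outcome relative to the programmability of the behavior.

```python
# Illustrative only: hypothetical agency-cost arithmetic and a crude contract heuristic.

def agency_cost(monitoring: float, bonding: float, residual_loss: float) -> float:
    """Agency costs = monitoring expenditures + bonding expenditures + residual loss."""
    return monitoring + bonding + residual_loss

def preferred_contract(outcome_measurability: float, behavior_programmability: float) -> str:
    """Rough reading of the definitions above: easily measured outcomes favour
    outcome-based contracts (market/outsourcing), easily specified behavior
    favours behavior-based contracts (hierarchy/in-sourcing)."""
    if outcome_measurability >= behavior_programmability:
        return "outcome-based contract (outsourcing)"
    return "behavior-based contract (in-sourcing)"

print(agency_cost(monitoring=15.0, bonding=5.0, residual_loss=8.0))               # 28.0
print(preferred_contract(outcome_measurability=0.9, behavior_programmability=0.4))
```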

As far as the resource-based theory is concerned, firms can fill the gap between desirable capabilities and actual ones by acquiring capabilities from an external source (outsourcing). However, to ensure competitive advantage, the resources should match the criteria of value, rareness, imperfect imitability and non-substitutability. The firm does not need to be the owner of capabilities that do not generate a competitive advantage. The sub-concepts of the resource-based theory are defined in the following table.

Table: Concept Definitions of the Resource-Based Theory

Rareness: IT resources need to be rare or unique among the competitors.

Imperfect Imitability: Valuable and rare firm resources can only be a source of competitive advantage if the competitors cannot imitate them, or can do so only with a huge level of difficulty.

Substitutability: For a sustainable competitive advantage there must be no strategically equivalent valuable resources that are themselves either not rare or imitable.
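The four criteria can be read as a simple checklist. The sketch below is only an illustration of that reading (the function and the example values are assumptions, not part of the thesis framework): a resource supports sustained competitive advantage only if it passes all four tests, and otherwise it is a natural candidate for outsourcing.

```python
# Illustrative VRIN-style checklist derived from the four criteria above.
# The example values describe a commodity cloud service and are assumptions.

def sustains_competitive_advantage(valuable: bool, rare: bool,
                                   hard_to_imitate: bool, hard_to_substitute: bool) -> bool:
    """A resource is a source of sustained advantage only if all criteria hold."""
    return all([valuable, rare, hard_to_imitate, hard_to_substitute])

# Standardized cloud e-mail: operationally valuable, but neither rare nor
# hard to imitate or substitute -> candidate for outsourcing.
print(sustains_competitive_advantage(valuable=True, rare=False,
                                     hard_to_imitate=False, hard_to_substitute=False))  # False
```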

In terms of the resource dependence theory, it can be said that all organizations are dependent on some elements of their external environment to varying degrees, due to the control these external environments have over resources. The degree of dependence is shaped by environmental concentration, munificence and interconnectedness. These three sub-concepts are explained in the following table.

Table: Concept Definitions of the Resource Dependence Theory

Environmental Concentration: Concentration refers to how widely power and authority are dispersed in the environment. A firm is less dependent on a provider if power and authority are dispersed among diverse providers.

Munificence: Munificence refers to the level of availability of critical resources. A firm is less dependent on a resource if it is easily available from diverse providers.

Interconnectedness: The number and pattern of the linkages among organizations. If the firm needs lots of connections and good relationships with other companies, it is more dependent than other companies.

The exchange theory argues that a well-developed contract is necessary but not sufficient for outsourcing success. Therefore the theory emphasizes the role of the outsourcing relationship and its influence on the outcome of an outsourcing arrangement.

An outsourcing relationship is shaped by attraction, expectation, norms, power and justice, and communication and bargaining, which are defined in the following table.

Table: Concept Definitions of the Exchange Theory

Attraction: Attraction is generated by the rewards provided directly to the client by the vendor as well as by rewards inherent in the characteristics of the vendor.

Norms: Norms refer to the expected patterns of behavior in a relationship. Through norms, partners can build solid knowledge about how to achieve the expectations of each other and construct a win-win outsourcing relationship.

Power and Justice: In an outsourcing arrangement the careful and just exercise of power is necessary to sustain and deepen the relationship.

5 Analysis and Empirical Findings

Firstly, the framework developed in chapter 4 is challenged as a valid model for a cloud computing evaluation. The analysis aims to find out whether the theories can be used as a guideline in the IT outsourcing decision process, addressing the first research question of this paper.

Each IT outsourcing theory and its respective concepts are individually confronted with the insights and information obtained in the interviews. Each theory presented in the framework is scrutinized in a separate section. The concepts that support each theory are presented with examples extracted from the interviews. To provide a quick summary of the findings, the sections also present a table explaining how the concepts were or were not represented in the data collection.

Secondly, evidence of the most important aspects to consider when evaluating a cloud computing solution is presented. The objective of this section is to identify important points which are not covered by the framework theories and could be added or changed as a consequence of particularities of the cloud computing solution in relation to traditional IT outsourcing.

All the insights are results of the interpretation of the data collected and the interviews performed in this study. The use of an external provider (market) as a way to reach economic efficiency was the main argument of the companies interviewed.

Economies of scale were achieved by providers which use a cloud computing architecture, impacting directly the price for the end users. SMEs could benefit immediately, while large companies are more sensitive to the uncertainties present in a cloud solution, for example when considering the security of data. Low specificity of services was a common point in traditional IT outsourcing and cloud computing services. Companies tend to outsource parts of their IT department which are not specific and consequently have standard alternatives offered by suppliers.

Other kinds of specificity, such as physical and human specificity, were also low in the cases assessed. There was no concern about the geographical location of the providers or about their skills and knowledge of the applications and the business. However, all of the companies were concerned about the availability of the services contracted. As a matter of fact, price and availability were the two most important aspects in all companies contacted.

The level of uncertainty avoidance was found to be different from company to company and from industry to industry. From the supplier side, the uncertainty in security and accessibility was fundamental when offering cloud solutions to clients. Talentia, a small-sized enterprise, appeared to be more flexible with uncertainty than Alstom as a large organization. Capgemini pointed out the sensitivity of the defense and government sectors to the security issues in cloud computing.

On the other hand, Alstom opted for an in-sourced solution to avoid the security uncertainty, having two global data centers, and therefore only outsources services that need to be available 24x7, which could not be provided locally. "But I think it would be beneficial to use cloud computing internally in our company in order to reduce the infrastructure locally and having all the servers for example in Basel (Switzerland)."

Talentia changed the technological solution but stayed with the same outsourcing supplier. Alstom outsourced through a global decision. However, the local management keeps weekly feedback after an underperforming start. Capgemini is working on pilot projects with its main clients, showing the importance of keeping the relationship links whatever technology is chosen.

Table: Findings related to the Transaction Cost Theory

Asset Specificity: Companies showed that they outsource parts of their IT department which are not specific and consequently have standard alternatives offered by suppliers, in traditional IT outsourcing as well as in cloud computing.

Uncertainty: The level of acceptance of uncertainty showed to be more sensitive in respect of security and accessibility. As a way to avoid uncertainty in cloud computing, the reputation of the provider was a key aspect in the decision process. Traditional IT outsourcing, in-sourcing and private cloud computing were options for large companies, which are more uncertainty sensitive.

Infrequency: Infrequency was avoided by all the companies interviewed. Migration from traditional IT outsourcing to a cloud computing solution showed the attempt to stay with the same supplier. Pilot projects with cloud solutions have been the best practice to keep the relationship.

The agency cost theory proved to be suitable for evaluating the cloud computing business model as a market option for product and service sourcing. The companies interviewed pointed out two important aspects that aim to reduce agency costs: the preference for well-known and experienced suppliers, and the emphasis on the importance of the contract and its performance indicators.

On the other hand, the supplier side presented the uncertainties of the cloud computing model. The immaturity of cloud computing presents some risks that none of the parties (client and provider) wants to assume. For instance, the responsibility for the data stored in the cloud is a blurry point in cloud computing agreements. The uncertainty aspects are directly linked with the concerns about provider reputation and contract performance indicators already cited.

These two aspects also converge with programmability and measurability. "No matter whether it is a traditional IT outsourcing or cloud solution, the agreement is really important; there are key performance indicators involved that they should meet."

In both cases the companies provide an obvious example of a high level of programmability and a need for measurability as a way to reduce agency costs. No difference between traditional IT outsourcing and cloud computing was encountered. On the supplier side, Capgemini emphasizes the risks of the immaturity of cloud computing and how it influences the offer for different industries.

The security issues and the dependence on internet connections shape the offers and the risks which the supplier is willing to assume. The maturation of the model will decrease the level of risk and consequently the risk avoidance from the supplier side. The valorization of relationships between clients and suppliers was more important than the actual duration of contracts.

The length of contracts is still not a pattern but varies case by case. Therefore, even in outcome-based contracts the behavior of the supplier is taken into consideration and is reflected in renewals to construct a new relationship, representing an increase in the agency costs. All companies are working on or have contracts that aim for a long-term client-vendor relationship.

Table: Findings related to the Agency Cost Theory

Uncertainty: As in the transaction cost theory, the level of uncertainty avoidance varies according to the business, for both cloud computing and traditional IT outsourcing.

Risk Aversion: The data emphasizes the risks of the immaturity of cloud computing and how it influences the offer for different industries. The security issues and the dependence on internet connections shape the offers and the risks which the supplier is willing to assume.

Measurability: The concern about key performance indicators in the contracts reflects the importance of measurability found in the interviews. Moreover, the level of standardization of the services offered via cloud computing provides a high level of measurability.

Length: The valorization of relationships between clients and suppliers was more important than the actual duration of contracts. There was no evidence of a difference between traditional IT outsourcing and cloud computing in terms of the length of the contracts.

The resource-based theory proved to fit very well in the traditional IT outsourcing model, where companies trace a strategy and outsource the parts that are not available internally and are necessary to achieve the strategy objectives.

With the cloud computing business model the theory proves to fit even better, firstly because of the nature of the services and secondly because of the affordable prices. The services and products offered via cloud computing present a fundamental characteristic: standardization. The characteristics of these services mean that they are not a source of competitive advantage; however, they are necessary to keep the lights on.

The lack of IT capabilities can be a barrier to the growth and development of businesses. IT is in most cases an enabler of business operations, and without IT resources the firm strategy would turn out to be limited in terms of both conception and implementation.

To summarize using the resource-based theory: cloud computing services are not rare because they are provided by several vendors; they are common and available to all players in the market, including competitors (imitable); and they are easily substitutable by other services or other suppliers that follow the same standards (technical and functional). The value of the IT resources available in the cloud can be compared, for example, with electricity: standard and basic services that work as an input to several IT processes and enable the company to perform and to grow.

However, the value of these services, when not integrated into the company's processes or needs, is not high. The properties and characteristics of cloud computing in the resource-based theory context justify the view of cloud computing as an outsourcing modality. Another point is the affordable price, which enables businesses that could not afford IT solutions before cloud computing to implement adequate strategies.

The prohibitive costs of IT resources and capabilities were in several cases a barrier to SME growth. In a context where many IT services are available to all companies in a pay-per-use model (reducing the initial investment), the resource-based theory turns out to be an even better theory to evaluate an IT outsourcing option. For Talentia and Alstom, IT is not the core business; however, IT resources are a must when talking about their operational activities.

"However our IT is very important to us." Alstom is a multinational company supporting a huge number of users, applications and processes through its systems and has a strong need for IT efficiency and availability to satisfy its clients and support its businesses. Both companies chose a cost-efficient IT outsourcing model to achieve better availability and service quality.

Alstom contracted an IT service solution available 24x7 for the end users, which was not possible to provide locally. Talentia migrated to a cloud computing solution that provides availability of the services and applications without the need for an internal IT infrastructure.

Capgemini, as an outsourcing provider, has the function of providing services that attend to the needs of the clients in the most cost-efficient way. The objective of attending to the demand for cost-efficient solutions moves the suppliers to offer products that match the concepts of the resource-based theory, i.e. that are valuable, more rare, difficult to imitate and difficult to substitute.

If more services are offered, the client has more choices (less rare, easier to imitate and possible to substitute with in-sourced services) and the supplier can increase sales. However, the challenge for suppliers is to fit a cost-effective technology to the business needs of the client companies.

As assessed through the interview with Capgemini Sweden, IT departments are enthusiastic about cloud computing solutions; however, from the business side the differences between traditional outsourcing and cloud computing are not clear yet.

Table: Findings related to the Resource-Based Theory

Value: The resources outsourced were standard and basic services that work as an input to several IT processes and enable the company to perform and to grow. They are valuable at the operational level but not valuable as a provider of competitive advantage.

Rareness: The services contracted via cloud computing and also via traditional outsourcing were not rare. Cloud computing raises concepts such as the commoditization of IT and utility computing.

Imperfect Imitability: Resources contracted in a traditional IT outsourcing or cloud computing modality were offered by several suppliers with very similar functionalities. The resources outsourced were easy to imitate.

Substitutability: Following the other concepts, outsourced resources were easily substitutable by other services or other suppliers that follow the same standards (technical and functional).

The resource dependence theory considers that all firms are, to a certain extent, dependent on external resources and links the success of companies with their ability to adapt to the environment.

This theory proved to be useful in the evaluation of a cloud computing solution as an outsourcing model. As a relatively new solution in the IT industry, cloud computing is an option for all companies; some of them can see the benefits immediately, while some of them judge that they do not fit the model yet. However, cloud computing promotes changes in the IT environment and changes the way firms buy IT resources. Nevertheless, the changes promoted by cloud computing and the new options in the IT market can be evaluated using the concepts presented in the resource dependence theory.

A company can consider the environmental concentration and how dispersed the power and authority is among the suppliers to evaluate a cloud computing solution. Companies which are not very sensitive to dependence on suppliers are much more flexible in adopting cloud computing. An example from the interviews is the case of Talentia, which adopted a Microsoft cloud computing solution. The level of power and authority of Microsoft is very high in relation to Talentia; however, its competitors, like Google Apps and Salesforce, decrease the level of environmental concentration.

In the case of Alstom, as a multinational company, the level of dependence the company is willing to assume is lower. The study of the adoption of a cloud computing solution is treated at the global level, while the local IT director believes that private clouds can be cost effective in the case of Alstom.

Change and environmental adaptation are trade-offs between the benefits of the solution and the level of dependence. As a small company, Talentia has more flexibility to adapt to external changes; on the other hand, Alstom has the core rigidities of a global company. In terms of munificence, a cloud solution does not represent a change from a traditional IT outsourcing solution, apart from the maturity of the model.

An enormous number of providers are in the cloud business today, which makes companies less dependent on a particular provider. Both Talentia and Alstom were able to choose among offers from a set of different providers. Capgemini states that the offer of cloud computing is not an issue for any company; however, the accountability of each supplier could be a concern.

"So we have some challenges here. Nevertheless, cloud computing will become enormous due to the fact that more and more people are using different kinds of services." Talentia is clearly more dependent than Alstom; however, the different categories of companies (SMEs versus large companies) justify different strategies and consequently different needs to acquire external resources.

Talentia clearly stated in the interview that IT resources are not part of its core business and that the company is not willing to own a complex IT structure. In this way the company decided on a cloud computing solution and increased its level of interconnection.

On the other side, Alstom can support its IT needs with its two existing global data centers. The option to outsource the customer services was based on improvements in availability and service, and was not necessarily the only choice the company had. The interconnections of Alstom are weak and define a low level of dependence. The benefits that the company can experience by acquiring resources via a cloud justify the choice. The strategy of Alstom is also coherent for a multinational company; however, the interview also highlights a possible benefit of internal private cloud solutions.

Table: Findings related to the Resource Dependence Theory

Environmental Concentration: According to the data collection it is possible to say that companies which are not very sensitive to dependence on suppliers are more flexible in adopting cloud computing.

Munificence: An enormous number of providers are in the cloud business today, which makes companies less dependent on a particular provider.

Interconnectedness: The number and pattern of linkages among organizations presented in the data collection varies more according to the strategy of each company than according to the technical solution. The need for external resources differs among companies and depends on several aspects. There was no evidence of a distinction in interconnectedness level between cloud computing and traditional IT outsourcing.

Transaction and agency cost theories describe the contract and the tasks of monitoring the IT outsourcing agreement as sources of costs to the firms.

According to the exchange theory, a set of concepts contributes to the success of outsourcing arrangements: attraction, expectation, norms, power and justice, and communication and bargaining. It was possible to understand that the exchange theory can be very useful in the evaluation of cloud computing as an outsourcing model.

The concepts are easily related to practice, and the relationship outside the contract boundaries also reflects the outsourcing reality. The immediate benefit of the contract of a cloud computing solution, namely not having to maintain an IT infrastructure, was the first step in the decision-making process.

Alstom also shows that the global decision to outsource the customer services was based on the improvement of availability. The company improved from locally restricted working hours to 24x7 availability provided by the outsourcing vendor. Capgemini, as a well-known outsourcing provider, uses several channels to create attraction, for example its website, advertisement, outsourcing quality prizes and so on.

The practice of small pilot projects offered by Capgemini is a clear way to generate attraction and improve the relationship between vendor and clients. Expectation and norms are two concepts that appeared interconnected in the empirical data. Knowing the norms, partners can build solid knowledge about how to achieve the expectations of each other and construct a winning outsourcing relationship. Talentia proved to have a solid relationship with its outsourcing provider.

The vendor was responsible for the traditional IT outsourcing solution, maintaining the infrastructure and applications. When changing the technological solution, Talentia decided to maintain the relationship, since the vendor was also a cloud provider for Microsoft solutions.

The good relationship and the quality of the services turned out to be a fundamental reason to avoid the attempt of choosing another supplier. "I spoke to the company that is doing our IT right now and our purpose of going into the cloud. This company was very responsive and informed us about the fact that they would also be moving towards cloud solutions with their customers."

The alignment of the expectations between the vendor and the local level was fundamental to adjust the performance. Capgemini has a history of long-term outsourcing relationships. Sets of norms pave the relationship with giant clients such as IKEA or the government. Some concepts, such as power and justice and communication and bargaining, are easily observed in cases of conflict or necessary adjustments between partners.

It was possible to assess a necessary negotiation in the case of Alstom and the underperformance of the outsourced service provider. As cited before, the local administration had key requisites that were not being attended to by the supplier's services.

Even though a global contract was set, the local Swedish administration entered into a negotiation with the supplier to align the needs of the local level. The communication channel was enabled and the partners have set up periodic meetings to manage the relationship and the performance requirements, which are constantly changing. The communication and the level of bargaining in this case were fundamental to both partners: on one side the supplier improves its performance, and on the other side the client has its needs attended to.

The supplier's behavior showed the balance of power and how the right management of relationship aspects can make an outsourcing initiative successful. The interview exemplifies how a well-balanced relationship foments a win-win commercial relationship in IT outsourcing. As the relations with providers are still present in cloud computing, it is possible to understand that there is no considerable change in the relationships described by the exchange theory when comparing traditional IT outsourcing and cloud computing solutions.

Table: Findings related to the Exchange Theory

Attraction: From the customer side, the benefits of the technology adopted were more important than the provider, appearing in both traditional and cloud solutions. From the supplier side, the direct offer of cloud solutions, for example via pilot projects, was the key aspect that generates attraction and improves client-vendor relationships.

Expectation: Expectation was shown to be extremely important in relationship maintenance. A client that changes technology showed a preference to stay with the same vendor, expecting the same level of compliance. Underperformance in an outsourcing relationship was a reason for realignment of expectations. The kind of outsourcing solution was not relevant in relation to this concept.

Norms: Norms were in a certain way merged with expectation in the data collected. Norms showed to be extremely relevant in the construction of long-term relationships.

Power and Justice: The exercise of power and justice was avoided in the examples collected in the interviews. Negotiation was the main action when the contracted services were not being delivered as expected.

Communication and Bargain: Communication and bargaining showed to be essential in solving problems and fomenting outsourcing relationships. However, there was no evidence of a difference between cloud computing and traditional IT outsourcing, since the relationship with vendors is still a common point.

These concepts were spontaneously cited within the interviews and were treated as key aspects that differentiate cloud computing from traditional IT outsourcing.

It is important to underline that both weaknesses and strengths of the actual cloud computing model can be pointed out by using these concepts. Some of these aspects are cited in the previous sections where the theoretical framework is analyzed; however, this section aims to scrutinize the special meaning of these concepts when inserted into sourcing via cloud computing solutions. The following sub-sections describe the concepts in detail and how they were raised in the data collection.

Clearly the concepts differ a lot when their definitions are compared; however, when inserted into the actual competitive economic context, where IT departments are obliged to cut costs and increase the level of the services provided, these two concepts necessarily appear together on the IT managers' agenda.

Supported by a cost-efficient architecture (detailed in chapter 2), products and services can reach economies of scale and be delivered at better prices than traditional IT outsourcing possibilities. Talentia pointed out the reasonable price of the solution compared with the traditional IT solution it previously adopted. The price of the services contracted turned out to be much cheaper if the expenses for maintenance, hardware, software and personnel are included in the account.

"This is basically why we decided to go for the cloud. Cloud Computing is a very price attractive solution […] Hence Cloud Computing is an easy way to keep up-to-date on a reasonable price level." The insights provided by Alstom's IT director in Sweden suggest the development of a private cloud, which would be much more cost efficient than having several local data centers.

The fact that most IT resources (hardware and software) are underutilized is mainly because they are sized to support seasonal overload, making the infrastructure costly for companies. Capgemini forecasts that in 2 or 3 years cloud computing will play a very important role among the IT outsourcing modalities. From a supplier perspective, Capgemini points out that the initial costs of offering cloud computing and the reduced number of large companies that are using it are some of the reasons why the price is still not competitive enough.

With the gradual adoption by large companies and consequently broader adoption by SMEs, cloud computing will be even cheaper to buy and even less costly to offer. "Those companies have to share the whole cost area. But the more and more business starts to enter cloud computing, the price will decrease."
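The underutilization argument can be made concrete with a small back-of-the-envelope calculation. The demand profile and prices below are invented for illustration (they are not figures from the interviews): owned capacity has to be sized for the seasonal peak, while a pay-per-use offer is billed on what is actually consumed.

```python
# Illustrative comparison of peak-sized on-premise capacity versus pay-per-use.
# Demand profile and unit prices are hypothetical.

monthly_demand = [40, 40, 45, 50, 60, 90, 100, 95, 60, 50, 45, 40]  # arbitrary capacity units

peak = max(monthly_demand)
on_premise_unit_cost = 2.0    # cost per unit of owned capacity per month (incl. maintenance)
pay_per_use_unit_cost = 2.6   # provider price per unit actually consumed

on_premise_cost = peak * on_premise_unit_cost * 12              # pay for the peak all year
pay_per_use_cost = sum(monthly_demand) * pay_per_use_unit_cost  # pay only for actual usage
utilisation = sum(monthly_demand) / (peak * 12)

print(f"average utilisation of peak-sized capacity: {utilisation:.0%}")   # ~60%
print(f"on-premise: {on_premise_cost:.0f}   pay-per-use: {pay_per_use_cost:.0f}")
```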

The internet connection dependence is clearly identifiable in the conceptualization of cloud computing. The same internet connection that brings the advantage of broad availability and independence from specific hardware, software and maintenance for the clients can be the source of vulnerability of the model in case of a lack of internet connection. Another point highlighted was data security: the advantage of access to and storage of the data in a cloud can turn into a vulnerability when considering the lack of regulation, the possibility of unauthorized access or the responsibility for corrupted-data incidents.

The issue of availability and vulnerability was emphasized in the interview with Capgemini. As a provider, Capgemini was more sensitive regarding the issues of data accountability and availability of the internet connection. When offering an IT outsourcing solution, Capgemini needs to consider these concepts in addition to the traditional methodology; more than evaluating what or how to outsource, the benefits of cloud computing are still bounded by certain particularities such as availability.

"You also have the certainty that the actual data you would be using on your cell phone will not be gone when the cell phone is lost. The data would be in the cloud, which you can access on other devices as well. However, there are also spots where there is a bad internet connection. Hence, in order for the cloud computing purpose to be fulfilled you need to have a perfect internet access wherever you are, because when there occur spots where employees are unable to work, then you have a liability; Cloud Computing would then turn into a disadvantage."
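Because the service is only reachable over the network, the availability experienced by the end user is bounded by both the provider and the local internet connection. The sketch below is purely illustrative (the availability figures are assumptions, and independent failures are assumed for simplicity):

```python
# Illustrative end-to-end availability of a cloud service accessed over the internet.
# Figures are hypothetical; failures of provider and connection assumed independent.

provider_availability = 0.999      # e.g. what a service-level agreement might promise
connection_availability = 0.995    # local internet access

end_to_end = provider_availability * connection_availability
downtime_hours_per_year = (1 - end_to_end) * 365 * 24

print(f"effective availability: {end_to_end:.4f}")               # ~0.9940
print(f"expected downtime: {downtime_hours_per_year:.1f} h/yr")  # ~52.5 hours
```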

Alstom's IT resources are structured in a mix of in-sourcing and traditional outsourcing; cloud computing was an option considered at the global level but not adopted there. The level of key performance indicators and availability imposed by Alstom made a cloud computing solution not an option for the management team. The vulnerability of the solution, in the perspective of a multinational company, continues to be a constraint to adopting a cloud computing solution.

The level of importance of availability and vulnerability, and the impact of these two concepts in case of failure, justify a special analysis when evaluating cloud computing solutions as an IT outsourcing option. The benefits that cloud computing can bring showed to vary according to company size and industry. The interviewed experts agreed that cloud computing solutions can be extremely beneficial to SMEs, which, before cloud computing, could not afford big IT investments and consequently could not support their business with the best IT solutions.

On the other hand, large companies were described in terms of their structure, complexity and, consequently, different requirements for IT outsourcing solutions. Moreover, the legacy systems and infrastructure of large companies need to be taken into consideration, and they are not always designed to integrate easily with cloud technologies.

In terms of industry, some of the points already cited, such as security and availability, come up again as critical issues for cloud computing solutions. The difference in size of the companies interviewed was strong evidence that company size matters when evaluating the benefits of cloud computing. The fifteen-employee company considered cloud computing a natural choice, since other SMEs were also migrating into the cloud.

This study does not cover the characteristics that differentiate SMEs and large companies; however, it considers the existence of these differences as a premise. Definitions of industries are also broad and outside the objective of this study.

The main objective of considering company size and industry is the possible linkage between both concepts and the benefits of a cloud computing solution.

6 Conclusion

This study has explored two important topics within information technology and business management: IT outsourcing and cloud computing.

IT outsourcing has been playing a relevant role in the current competitive environment, enabling several kinds of IT resource arrangements and aiming to improve business efficiency. The recent business model provided by a combination of cost-efficient technologies is called cloud computing and appears as a new option for IT outsourcing.

Facing the novelty of the cloud computing model, the comparison with the evaluation of traditional IT outsourcing was a gap encountered in the academic and professional literature. Therefore the research question was stated in terms of how the evaluation of cloud computing as an outsourcing option differs from traditional IT outsourcing.

Aiming to answer the research question this study thoroughly investigated whether or not cloud computing can be evaluated using the set of the most recognized IT outsourcing theo- ries transaction cost theory, agency cost theory, resource-based theory, resource de- pendence theory, exchange theory. Complementarily, this paper highlighted important concepts, which should be specially analyzed when evaluating a cloud computing solu- tion.

Initially, this study provided a broad technical and business explanation about cloud computing and its particularities. It was important to clarify that the technologies in- volved in the solution are not recent, however the maturation of the business model and the market needs for more cost efficient solutions leverage the business success. A de- tailed review of the most recognized IT outsourcing theories was performed in a way to develop a theoretical framework effectively to challenge the data collection and to veri- fy whether the theories are suitable or not for the evaluation of cloud computing solu- tions.

The data collected from three Swedish companies represented three different groups, SMEs, large organizations and IT outsourcing suppliers, which was extremely important for the research at hand. The analysis of these three groups was an important feature, firstly because the companies represented the three main parties involved in IT outsourcing arrangements.

Secondly, the content obtained through unstructured interviews provided valuable raw material for the analysis and brought insights to this study. The first part of the research question is addressed by confronting the collected data with the theoretical framework. This confrontation suggests that the theories used to evaluate traditional IT outsourcing can also be used to evaluate cloud computing solutions. The framework appeared to be quite complete, since it covers different perspectives of IT outsourcing.

Moreover, scrutinizing all the concepts involved in each IT outsourcing theory represented in the framework demonstrated a reasonable level of transparency and highlighted the coherence between the analysis and the findings. Going beyond the framework boundaries and considering the particularities of the cloud computing solution, the second part of the research question is fulfilled. The additional concepts provide a better understanding of the suitability of cloud computing and help assess the main strengths and weaknesses of cloud computing technologies, notably their availability.

The study advocates that the combined IT outsourcing theories and additional concepts are able to provide a better output for the IT outsourcing decision process. Although this study succeeds in answering the main research question and contributes new, useful knowledge for professionals and the scientific community, it is important to underline the constraints that bounded the research process and limited its outputs.

Constraints such as time, budget, geographical breadth, confidentiality and the small number of companies that have already adopted cloud computing shaped the level of detail of the research outcome. However, the coherence of the research methods applied provides credibility to the outcomes and enriches the insights and knowledge generated in this study.

7 Reflection and Prospect

As mentioned among the data collection constraints, there is huge potential for future research as more and more companies become involved in cloud solutions, creating more data collection possibilities.

However, as of now a broad range of consumers and vendors has been covered, which gives an incentive for further investigation. For further research, other countries could be taken into account, as mentioned in Saunders et al.

In this case we use resources more efficiently than before. Eliminating application incompatibility issues: virtualization provides a suitable environment to run different applications and operating systems on a single machine, even when they would otherwise need to run on individual machines, without them affecting each other.

Rapid return on investment: after investing in virtualization, the investment is typically paid back in less than six months. Enhancing business continuity: every virtual machine is isolated, so errors or crashes in one do not affect the other virtual machines. In addition, an image of a virtual machine can be taken at specified times in order to restore it to the required point in case of failure, which helps resume operation as soon as possible.

Enabling dynamic provisioning: physical storage is shared among different virtual machines, and storage is added or removed according to need. This feature helps consolidate underutilized storage and prevents overloading it. Enhancing security: virtualization has two security advantages.

First, each user interacts only with its dedicated virtual machine and thus has no access to the other ones. Second, if a virtual machine is affected by a virus or another network attack, the damage is contained inside that VM and does not spread to other VMs. The last benefit is agility, which is achieved in the following ways. Providing a logical IT infrastructure: the IT infrastructure, including hardware, software, storage, networks and data, is viewed as separate layers.

The consequence is simplified provisioning, troubleshooting and management of systems. From another perspective, consolidating servers facilitates monitoring and management on a single server. Facilitating self-managing dynamic systems: the more agile the IT infrastructure is, the more dynamically it processes and responds.

Portability: because virtual machine data is stored in files on the physical server, it can easily be transferred to another server. VM migration [10]: as one of the best features and capabilities of virtualization, VM migration ensures the availability of applications.

In case of maintenance or troubleshooting, systems need to be shut down for a while. With VM migration we can move a virtual machine, together with its operating system and applications, to another physical server without interrupting application operation. To increase application uptime it is preferable to use live migration, which takes place while the VM is running and continues the operation on the target system. VM migration should happen under the following conditions [42]:
1. Resource utilization approaches a level that puts the service level agreement at risk.
2. A resource is underutilized, so its VMs should migrate to another server and the inefficient server can be powered down.
3. A VM communicates heavily with a VM running on another server.
4. The VM's temperature exceeds a threshold because of its workload, so it should migrate to another server to let the overheated server cool down.
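A minimal sketch of these triggers as a single check is given below; the thresholds and the HostState fields are illustrative assumptions for this sketch, not values taken from [42].

```python
# Illustrative migration-trigger check; thresholds and fields are assumptions.
from dataclasses import dataclass

@dataclass
class HostState:
    cpu_utilization: float       # 0..1, load on the physical server
    temperature_c: float         # measured server temperature
    remote_traffic_share: float  # 0..1, share of a VM's traffic going to another host

def should_migrate(host: HostState,
                   high_util=0.9, low_util=0.2,
                   max_temp_c=75.0, chatty_share=0.5) -> bool:
    """Return True if any of the four triggers described in the text fires:
    SLA risk (near-saturation), underutilization (consolidate and power down),
    heavy cross-host VM communication, or overheating."""
    return (host.cpu_utilization >= high_util
            or host.cpu_utilization <= low_util
            or host.remote_traffic_share >= chatty_share
            or host.temperature_c >= max_temp_c)

print(should_migrate(HostState(cpu_utilization=0.95, temperature_c=60, remote_traffic_share=0.1)))
```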

Here are some advantages that VM migration brings to Cloud computing. Elasticity and scalability: turning VMs on and off requires far fewer actions than doing the same with physical servers. Workload migration: with live migration, moving a workload to another server is much easier than moving it across physical servers. Resiliency: this technique shields user services against server failures. Desktop virtualization: on the server side, the client accesses an application or desktop on a remote server; on the client side, virtualized applications are delivered to the client on demand from the remote server.

Desktop virtualization requires a certain QoS to meet client demands, which can be achieved by means of WAN optimization. One advantage of a virtual appliance is that a software-based appliance is more cost effective than a hardware-based one with the same functionality. Furthermore, a software appliance provides more availability without the need to buy more hardware.

Another advantage is that it simplifies networking actions in case of VM migration. There must be a dynamic synergy between the virtual server and the appliance. Server virtualization: a virtual server allows consolidation in such a way that several VMs run on the same physical server instead of each having its own, which leads to lower cost in terms of hardware, management of site infrastructure facilities and space.

Provisioning of VMs addresses clients' urgent demands for additional resources, while VM migration guarantees the availability of services. Operating-system-level virtualization: each application interacts only with its own operating system environment and the applications running on it. This approach is useful for web hosting, where the same service is offered to users on a single machine, as in the case of Facebook. Its pitfall is that, since a single operating system is shared, it cannot run applications with conflicting demands in terms of patch level or operating system version.

Hardware emulation: in this type of server virtualization, applications run on multiple operating systems, each of them completely separated by a virtual machine monitor (VMM). The VMM sits on the virtualization hypervisor. The operating systems can come from different vendors or have different versions or other characteristics. This approach relies on virtualization software, the hypervisor, that emulates hardware so that operating systems can run on it.

This software is called the virtual machine monitor, or VMM. The advantage of this method is that different operating systems run in parallel, which is beneficial for testing and deploying software in distinct environments. A drawback is that applications running on the virtual system are slower than on the physical system.

Another drawback concerns the device drivers that the hypervisor must contain to access the resources: if a user needs a new device driver, this is not possible, because a hypervisor that has no driver for that device cannot use it on the machine. Companies offering this kind of virtualization include Microsoft, with the Hyper-V component that is part of Windows Server, and VMware, in two forms: VMware Server and ESX Server.

Xen is another, open-source technology that performs hardware emulation. Another well-known platform for hardware virtualization is the kernel-based virtual machine (KVM), which is a Linux operating system solution [43]. Para-virtualization: this concept is similar to hardware emulation, but the difference is that only one operating system accesses the physical resources at a given moment. It has two advantages.

The first is that it puts less overhead on performance than full hardware emulation. The second is that it can exploit the capabilities of the physical hardware in the server instead of using drivers that exist inside the virtualization software. The important disadvantage of this approach is that the operating systems have to be modified in order to access the underlying resources, so they should be open source, like Linux.

XenSource offers Xen as a pioneering example of para-virtualization. In the experiment part we use Xen as the server virtualization technology to reach the energy efficiency goal. Figure 9 shows the virtualization architecture as layers: the lowest layer is the physical resources, the second layer from the bottom is the VMM, and the next one is the operating system.

The highest layer is the application. Another problem is that, as applications become more and more internet based, multiple users access the same data at the same time, which can cause traffic congestion at the server. For these reasons data should be virtualized to avoid access contention and to improve data management while reducing cost. There is also an important issue with virtualization: although it is a software technology, eliminating some servers and migrating operating systems to virtual machines makes the remaining servers much more critical, since each of them hosts many virtual machines serving a large population; losing one of them would be an enormous problem and must be taken into consideration.

First of all, we need a comparison between a traditional data centre and a Cloud data centre, to clarify whether it is beneficial to move computing to a Cloud environment or not [55, 56]. A traditional data center may even run a single application, as in the case of Facebook.

A Cloud data center can also operate single workloads, but when a workload is optimized for it, the Cloud data center acts more efficiently and economically. In a Cloud data center the service provider takes care of maintenance. In contrast to a traditional data center, a Cloud data center uses a homogeneous hardware environment: it aggregates homogeneous resources and makes them available to customers more quickly than a traditional data center can.

A Cloud data center is also shared across multiple enterprises and mostly uses standardized infrastructure. It is quite clear that moving to Cloud data centers is more cost effective and better utilized than staying with a traditional one.

Cloud data centers are simpler to organize and operate, and they are scalable: the cost per user falls as the Cloud data center is expanded. There are many approaches to the energy efficiency of Cloud data centres at different levels, such as hardware, operating system and data centre. The data centre level is the most important one; addressing energy efficiency challenges at this level is more complicated and comprehensive, since it covers the other levels as well.

It uses DVFS, and the goal is to minimize power consumption while keeping good performance. ECOsystem: the system calculates the required power, then distributes it to applications according to their priorities; each application then consumes its power budget through resource utilization and throttling. The goal is to reach a target battery lifetime on mobile systems.

Nemesis OS: if applications exceed a threshold of energy consumption, they must adjust their operation according to a signal received from the OS. Resource throttling is used here, and the goal is again a target battery lifetime on mobile systems.

GRACE: global, per-application and internal are three levels of adaptation that are coordinated to ensure effectiveness. Resource throttling and DVFS are used here, and the goal is to minimize power with acceptable performance. DVFS is used as the technique, and the goal is to minimize power consumption while meeting good performance. Coda and Odyssey: Coda signals application adaptation by distributing a file to the applications, whereas Odyssey does so by allowing regulation of the resource. Resource throttling is used here, and the goal is to minimize energy consumption while allowing some degradation of application data quality.

PowerNap: it leverages short sleep modes by dynamically deactivating system components. The goal is to minimize power consumption while satisfying performance.

Having presented the operating system approaches to energy efficiency, we now introduce power management approaches at the data centre level, comprising two parts: projects based on virtualized data centres and those targeted at non-virtualized data centres [44].

Finally, a Green Cloud architecture [8, 32], which appears to be the most appropriate approach in terms of all aspects of sustainability, will be discussed in detail, and the reasons why this approach surpasses the others will be explained. Cloud balancing [35]: this refers to a concept based on global application delivery technology that can optimize performance, availability and cost control.

Cloud balancing routes, redirects and splits requests across multiple application controllers located in Cloud data centers belonging to different Cloud providers. It can make decisions based on myriad criteria, such as user location, specific SLAs, data center capacity, application response time, time of day, and the cost of application delivery and execution associated with each request. Depending on these criteria, any Cloud provider may end up serving the request.

Cloud balancing is the evolution of global server load balancing, with enhanced features that move it from simple routing to content-aware distribution across the Cloud environment. Decisions are made by a context-aware global application delivery infrastructure, and the goal is fulfilled through the collaboration of local load balancing and global application delivery across data centers. Cloud balancing can be used in both virtualized and non-virtualized Cloud data centers. In the experiment part we use Cloud balancing as an energy-efficient technique.
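To make the routing idea concrete, here is a minimal sketch of a balancer that scores candidate data centers by a weighted combination of some of the criteria listed above; the DataCenter fields and the weights are illustrative assumptions, not part of the cited work.

```python
# Minimal cloud-balancing sketch: pick the data center with the best weighted score.
from dataclasses import dataclass

@dataclass
class DataCenter:
    name: str
    latency_ms: float       # measured latency from the user's region
    free_capacity: float    # fraction of unused capacity, 0..1
    cost_per_request: float

def choose_data_center(candidates, w_latency=0.5, w_capacity=0.3, w_cost=0.2):
    def score(dc):
        # Lower latency and cost are better; more free capacity is better.
        return (w_latency * dc.latency_ms
                - w_capacity * 100 * dc.free_capacity
                + w_cost * 1000 * dc.cost_per_request)
    return min(candidates, key=score)

dcs = [DataCenter("EU", 40, 0.2, 0.0004), DataCenter("US", 120, 0.7, 0.0003)]
print(choose_data_center(dcs).name)
```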

The server power switching technique is used. The goal is to minimize power consumption and performance degradation. Managing Energy and Server Resources in Hosting Centres: it takes an economic view, in which the system weighs the cost of a resource against the usefulness of assigning it to a service, in order to maximize profit.

Resources are requested by services in terms of volume and quality, and a subset of servers is elected to handle each service. Workload consolidation and server power switching are used as the energy efficiency techniques. Energy Efficient Server Clusters: the system predicts the total CPU frequency needed to meet a reasonable response time.

It then subdivides that frequency into smaller frequencies allocated across a number of nodes, and a threshold determines when to turn nodes on or off. Dynamic voltage and frequency scaling and server power switching are used in this project. The goal is to minimize power consumption while meeting performance.
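As a rough illustration of that idea (not the published algorithm), the predicted total frequency demand can be spread over just enough nodes, with the rest switched off; the numbers below are arbitrary.

```python
import math

def plan_cluster(required_freq_ghz, node_max_freq_ghz, total_nodes):
    """Spread the predicted total frequency demand over the fewest nodes.

    Nodes that are not needed are switched off; active nodes run at the
    (equal) per-node frequency via DVFS. Purely illustrative."""
    active = min(total_nodes, max(1, math.ceil(required_freq_ghz / node_max_freq_ghz)))
    per_node_freq = required_freq_ghz / active
    return {"active_nodes": active,
            "switched_off": total_nodes - active,
            "per_node_freq_ghz": round(per_node_freq, 2)}

print(plan_cluster(required_freq_ghz=9.5, node_max_freq_ghz=2.4, total_nodes=8))
```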

If a request cannot serve, using the same heuristic another server powers on to handle the allocated requests. This Project uses server power switching and consolidation. Hint: bin packing is a kind of problem in which certain number of bins of capacity contains objects of various volumes.

Besides, it can define the power requirement for functions optimally. DVFS is used, and the goal is to allocate the whole power budget so as to lower the average response time. Environment-Conscious Scheduling of HPC Applications: it schedules high-performance computing applications using five heuristics across Cloud data centers located in different geographical areas.

DVFS and exploiting the geographical distribution of data centers are the power-saving techniques. The objectives of this project are to minimize CO2 emissions and energy usage while maximizing revenue. The goal of this approach is to minimize power usage while delivering the expected performance. Hint: soft scaling is an emulation of hardware scaling that uses the VMM's scheduling ability to give a VM less time on the resource.

Coordinated Multi-level Power Management for the Data Center: it is based on different power management controllers acting dynamically across nodes to allocate power according to a power budget. The goal is to meet the power budget while considering performance and minimizing energy consumption.

Power and Performance Management of Virtualized Computing Environments via Limited Lookahead Control: the state of every application is tracked and predicted via a simulation-based learning scheme called limited lookahead control (LLC), using a Kalman filter. Resource Allocation using Virtual Clusters: this research uses a bin-packing method that orders resource requests from the most demanding to the least demanding. Resource throttling is the technique required in this project.

Performance satisfaction and maximizing resource utilization are the goals. Hint: resource throttling is the regulation of resources by means of algorithms. A global-level scheduler controls the flow of resources to applications and uses resource throttling as the energy saving method. Maximizing resource utilization and achieving acceptable performance are its goals.

Shares and Utilities based Power Consolidation in Virtualized Server Environments: the quantity of resources allocated to each VM is specified and, using a share-based technique, the hypervisor distributes the resources among the VMs. DVFS and soft scaling are the energy-efficiency mechanisms, and the goal is to minimize power consumption while meeting performance. A migration manager is in charge of live VM migration, while a power manager is responsible for DVFS and power state adjustment.

In addition, an arbitrator decides on VM migration and relocation. Resource Pool Management: Reactive Versus Proactive: a workload placement controller performs proactive global optimization, whereas reactive adaptation is carried out by a migration controller. VM consolidation and server power switching are the methods used. The aim is to minimize power consumption while satisfying performance.

DVFS and exploiting the heterogeneity of the Cloud data center are the beneficial techniques here. This approach aggregates resource requests by negotiating with users and making green offers to them, so that idle servers can be turned off. Resources are monitored by energy sensors to support efficient resource allocation. Server power switching and VM consolidation are used here. The goal is to minimize power consumption without degrading performance.

As expected, virtualized data centre approaches respond better than non-virtualized projects, since they exploit one of the most effective energy-efficiency factors, virtualization, which combines server consolidation, resource provisioning and VM migration with server power switching and DVFS to minimize power while satisfying performance. The last but foremost project is the Green Cloud architecture, described below [8, 32].

Although minimizing energy usage is mostly beneficial, it does not by itself guarantee a decrease in carbon emissions. For example, using a cheaper source of energy such as coal may pollute the environment more than before. Therefore a green, carbon-aware architecture is proposed to reduce the carbon footprint of Cloud computing while preserving quality of service, such as response time. This research contributes two parts: a CO2-aware architecture, and carbon- and cost-efficient policies such as CEGP for scheduling application workloads across data centers with lower CO2 emissions and higher energy efficiency within the given deadlines.

This architecture is energy efficient from both the user and the Cloud provider perspective. Earlier work, such as the GreenCloud architecture [33, 47], whose goal is to lower the energy consumption of virtualized data centers through VM migration and placement, or the Green Open Cloud (GOC) solution proposed in [45, 46], designed for future data centers that support advance reservation by consolidating workload, is energy efficient but does not necessarily reduce CO2 emissions.

This means that energy efficiency is not always equivalent to carbon efficiency. Another approach, proposed in [34], considers emission and cost objectives simultaneously for a non-virtualized data center. Here we discuss a framework that minimizes the carbon footprint as a whole, considering all service models, together with a carbon-aware policy for IaaS providers.

Figure 10 shows the green Cloud architecture, which addresses environmental sustainability from both the user and the provider point of view. It includes the following elements:
1. Third party: a green offer directory listing green Cloud services, and a carbon emission directory listing energy efficiency information about those services.
2. User: a green broker that accepts Cloud requests (SaaS, IaaS or PaaS) and selects the greenest Cloud provider.
3. Provider: green middleware that makes the operation inside the Cloud generate the least possible carbon emissions.
These components may differ depending on the service model. Information about the energy efficiency of services, consisting of power usage effectiveness (PUE), the cooling efficiency of the data centers, network cost and emissions, is kept in the carbon emission directory.
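As a way of picturing such a directory, the record sketch below carries the fields named in the text; the concrete types, units and the ranking heuristic are assumptions of this illustration, not part of the cited architecture.

```python
from dataclasses import dataclass

@dataclass
class CarbonDirectoryEntry:
    """One record of the carbon emission directory, following the prose above.

    The field set matches the text (PUE, cooling efficiency, network cost,
    emissions); types and units are illustrative assumptions."""
    provider: str
    data_center: str
    pue: float                  # power usage effectiveness (total power / IT power)
    cooling_efficiency: float   # e.g. coefficient of performance of the cooling plant
    network_cost_per_gb: float  # monetary cost of data transfer
    co2_per_kwh: float          # grams of CO2 emitted per kWh consumed

def greenest(entries):
    # A green broker could rank services by effective carbon intensity: grid
    # carbon per kWh scaled by the facility overhead captured in the PUE.
    return min(entries, key=lambda e: e.co2_per_kwh * e.pue)
```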

The green broker can therefore obtain the energy factors of services from this directory. The green broker is responsible for delivering Cloud services to users and for scheduling the applications leased to them, like a typical Cloud broker. The first layer of the green broker deals with Cloud service requests, which need to be analyzed and their QoS requirements specified. In the second layer, information retrieved from the carbon emission directory and the green offer directory, concerning the energy emissions of the requested services and the available green offers, is used to calculate the cost and carbon footprint of leasing the services.

Green policies then decide how to deliver the services based on these calculations. Alternative green offers may be proposed to users when there is no match for the request. The type of service the user requires determines the carbon footprint, which is the sum of the CO2 emissions generated for transmission and for service processing at the data center.

In order to have fully carbon-aware Cloud computing, green middleware deployment must be considered within the different service models as follows. SaaS: SaaS providers mostly serve their own software or lease it from IaaS providers. It is therefore essential to have power capping technology to restrict this kind of service usage, which is reasonable in situations where users are indifferent to environmental sustainability, such as social networking and gaming applications.

SaaS providers can operate green data centers offering green software services. Hint: power capping refers to limiting the power used by a server. PaaS: energy-efficient components such as a green compiler and CO2 emission measurement tools, which let users monitor the greenness of their applications, should also be provided at this layer.

IaaS: this layer supports the other service models (SaaS and PaaS) in addition to its own task of providing infrastructure as a service to Cloud users; thus IaaS plays a significant role in the green Cloud architecture. Cloud providers use up-to-date technologies such as virtualization, with features like consolidation, live migration and performance isolation, to give their data centers the most energy-efficient infrastructure.

Environmental sensors calculate the energy efficiency of data centers, and this information is maintained in the carbon emission directory. Green resource provisioning and scheduling policies help reduce energy consumption. Furthermore, Cloud providers can use green offers to motivate users to consume services in off-peak hours, when data centers work at a higher rate of energy efficiency.

Scheduling policies [8, 34, 48]: sustainability covers two aspects, minimizing carbon footprint and cost, and maximizing profit. The policy finds the data center with the smallest carbon footprint in each period, and the VMs there perform the job. The green broker obtains the best Cloud provider, in terms of least carbon emissions, from the carbon emission directory. The QoS of a job is specified along several dimensions, such as the number of CPUs needed to run it, its deadline and its execution time at a given frequency.

CEGP sorts the jobs by deadline and then sorts the data centers by carbon footprint; it allocates jobs to data centers in a greedy way to decrease carbon emissions, and jobs are then dispatched to VMs according to this ordering. Other policies are used specifically in the non-virtualized version of the architecture. Minimizing CO2, Greedy Minimum Carbon Emission (GMCE): applications are sorted earliest-deadline-first, whereas data centers in different regions are ordered by their carbon efficiency.

The scheduler dispatches applications to data centers based on this ordering. In the profit-oriented variant, the scheduler instead maps applications to the most profitable Cloud site according to its ordering. It is notable that this green Cloud architecture applies DVFS before the scheduling technique decides how to allocate applications to Cloud sites.
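The greedy, carbon-aware dispatch described for CEGP (sort jobs by deadline, sort data centers by carbon footprint, assign greedily) can be sketched as follows; this is an illustration under a simplified, single-resource capacity assumption, not the published CEGP algorithm.

```python
def carbon_greedy_schedule(jobs, data_centers):
    """jobs: list of dicts with 'id', 'deadline', 'cpus'.
    data_centers: list of dicts with 'name', 'co2_per_cpu_hour', 'free_cpus'.
    Returns a mapping job id -> data center name (or None if nothing fits)."""
    plan = {}
    ordered_jobs = sorted(jobs, key=lambda j: j["deadline"])                  # earliest deadline first
    ordered_dcs = sorted(data_centers, key=lambda d: d["co2_per_cpu_hour"])   # greenest first
    for job in ordered_jobs:
        for dc in ordered_dcs:
            if dc["free_cpus"] >= job["cpus"]:
                plan[job["id"]] = dc["name"]
                dc["free_cpus"] -= job["cpus"]
                break
        else:
            plan[job["id"]] = None  # no data center had capacity; deadline handling omitted
    return plan
```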

Thus it seems more reasonable to apply green policies rather than profit-oriented ones as the scheduling policies. To give a deeper and more tangible understanding of the Cloud computing environment, in the implementation section we model and analyse the well-known, globally used, large-scale Cloud-based application Facebook under various deployment models, studying its behaviour with respect to energy, cost and performance.

We first give a brief introduction to the simulation software, followed by its characteristics and capabilities. There are, however, several factors that affect these three goals, such as the distribution of user bases, the internet characteristics of different geographic regions, and the Cloud's policies for allocating resources to users within and across data centres using virtual machines.

Studying the behaviour of a Cloud environment in the real world is extremely complicated, so the best solution is to do it virtually, through simulation. There are several toolkits for modelling and simulating Cloud environments, such as CloudSim, GridSim and SimJava, but of these only CloudAnalyst offers a visual interface that avoids dealing with programming details. Parameters can be modified easily and quickly, leading to better performance and precision.

CloudAnalyst is based on the CloudSim toolkit and developed on the Java platform. High level of flexibility and configurability for a simulation: internet applications are complicated and require many parameters to be defined, so it is helpful to initialize and modify them and repeat the simulations to reach a more accurate result. Visual output: a visual representation is several times more valuable than words alone.

Statistics derived from the simulation are preferably presented as tables and charts to be more effective. Repeatability: to be trustworthy, simulation software must produce similar output when a run is repeated with the same input under the same conditions. It is also useful to be able to save inputs and outputs to a file. Ease of extension: the tool can be extended to meet future requirements and parameters of complex internet applications.

CloudAnalyst consists of the following components and responsibilities. GUI package: the graphical user interface lets users configure and repeat simulations in an accurate and efficient manner. User base: this concept represents a group of users acting as a single entity that generates traffic for the simulation. Internet: this component models the internet and traffic routing at a smaller scale, considering delay and bandwidth between regions. Region: CloudAnalyst divides the world into six regions, corresponding to the continents, to simulate real-world application behaviour.

Internet cloudlet: a group of internet requests treated as a single unit, carrying information such as the size of the input and output files, the request size, the source and destination IDs used for routing via the internet, and the number of requests. Data centre controller: the most important component of CloudAnalyst; each data centre has a controller that manages VMs, load-balances traffic and routes requests to the VMs via the internet.

Active monitoring load balancer: this policy tries to allocate an equal number of requests to each VM, keeping them in a similar processing condition. Throttled load balancer: it allocates only a certain number of cloudlets to a VM at a time. Cloud application service broker: it decides which data centre should serve the requests from the user bases.

There are two practical routing policies for controlling traffic through the broker. Service-proximity-based routing: the service broker routes traffic to the data centre closest to the user base in terms of latency. Performance-optimizing routing: the service broker observes the performance of all data centres and selects the best one, from a response-time perspective, to handle the requests.
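As a rough sketch (not CloudAnalyst's Java implementation), the two broker policies can be expressed as selection functions over per-data-centre measurements; the latency and response-time figures below are hypothetical inputs.

```python
# Illustrative broker policies; the measurements are hypothetical, not CloudAnalyst output.

def proximity_broker(user_region, data_centers, latency_matrix):
    """Route to the data centre with the lowest network latency from the user's region."""
    return min(data_centers, key=lambda dc: latency_matrix[(user_region, dc)])

def performance_broker(data_centers, observed_response_ms):
    """Route to the data centre currently showing the best (lowest) response time."""
    return min(data_centers, key=lambda dc: observed_response_ms[dc])

latency = {("EU", "DC1"): 25, ("EU", "DC2"): 110}
response = {"DC1": 320.0, "DC2": 180.0}
print(proximity_broker("EU", ["DC1", "DC2"], latency))    # -> DC1 (closest)
print(performance_broker(["DC1", "DC2"], response))       # -> DC2 (fastest right now)
```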

In addition, this policy shifts load from a heavily loaded data centre, as occurs at peak hours, to less loaded ones. Related work: a previous study of Facebook [49, 50] assessed the cost and performance aspects of data centres in terms of VM and data transfer cost and of response and processing times.

Here we have expanded the scenarios with a different population distribution, based on the latest worldwide Facebook statistics, which indicate that Europe now holds the most users of any continent, whereas in the previous work it was North America.

Furthermore, in our experiments the overall population has grown to around eight hundred million, compared to around two hundred million in the former work. Regarding energy efficiency, we compared the energy efficiency before and after virtualization, and we investigated the combined effect of virtualization on the IT and site infrastructure.

Aim of the simulation: simulating a large-scale application like Facebook is a tough task, because so many parameters need to be considered. The primary goal of our simulation is to study how Facebook behaves in a Cloud environment.

The secondary goal is to incorporate the energy-efficiency aspects of Cloud computing into our simulation, including virtualization, cost reduction and enhancing the quality of service (response time). We focus on virtualization, in the form of para-virtualization based on Xen technology, as the most important factor for reaching an energy-efficient simulation.

Although our final approach in the main part is the Green Cloud architecture, which considers the carbon footprint, with the facilities and equipment at our disposal it was not feasible to simulate that architecture, as it requires very complex infrastructure. We therefore propose to study an energy-efficient and sustainable Cloud-based application with the only software, as far as we know, capable of simulating a Cloud environment.

The service broker keeps a list of DataCenterControllers tagged by region and chooses which DataCenterController serves a request according to the policies mentioned before, which are embedded in the service broker. Simulation configuration [50]: our simulation assumptions are described in the tables below. We assume that each user issues a new request every 5 minutes, the data size per request in bytes is defined, the simulation duration is set to one day, and the cost plan is based on the best-known Cloud provider, Amazon EC2 [52].
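A compact way to restate these assumptions is the configuration sketch below; the structure and key names are illustrative (this is not a CloudAnalyst file format), and only values stated in the text are filled in.

```python
# Illustrative simulation configuration; structure and key names are assumptions,
# values are those stated in the text (others are left unspecified).
simulation_config = {
    "request_interval_minutes": 5,       # each user issues a new request every 5 minutes
    "simulation_duration_hours": 24,     # one day
    "cost_plan": "Amazon EC2",           # pricing based on the EC2 model [52]
    "regions": 6,                        # CloudAnalyst's six world regions
    "vm_load_balancing": "round robin",  # default policy used in scenario 0
    "service_broker_policy": "closest data center",
}
```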

We implement different scenarios to study the behaviour of Facebook under various configurations. The transfer time T_transfer is the size of a unit of data divided by the bandwidth dedicated to each user, where BW_total is the available bandwidth defined in the internet characteristics and Nr stands for the number of user requests currently in transmission.

The internet characteristics also cover the transfer of user requests between different regions; the network latency T_latency is half the time taken for a round-trip ping. Our main approach in the scenarios is virtualization, in order to satisfy energy efficiency. All figures are shown in the appendix (chapter 9).
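Written out, the response-time decomposition described above is as follows (a reconstruction from the prose; the assumption that the two components simply add up to the response time is ours):

```latex
T_{\mathrm{transfer}} = \frac{D}{BW_{\mathrm{total}}/N_r}, \qquad
T_{\mathrm{latency}} = \tfrac{1}{2}\,T_{\mathrm{ping}}, \qquad
T_{\mathrm{response}} \approx T_{\mathrm{latency}} + T_{\mathrm{transfer}}
```

where D is the size of a single unit of data and T_ping is the round-trip ping time between the two regions.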

In the experiment part we only condense the results into a table. We verify only the energy efficiency of a single data centre located in Europe prior to applying virtualization. In this scenario, all Facebook requests coming from all continents are processed by a single data center placed in region 2 (Europe), using 60 VMs on 20 physical servers, with the default round robin policy for VM load balancing and the closest data center as the service broker policy.

The response times of the different user bases illustrate that during peak hours one user base can influence the others. For instance, during the hours when user base 1 generates a large number of requests, the other regions have comparatively less service demand. As can be seen from the response time figures, response times fluctuate according to each user base's peak-load hours, yielding the overall 24-hour average response time, alongside the data center's average processing time. The figures in the appendix illustrate the average time taken by the data center to serve a request from the different regions over 24 hours and the number of requests served per hour.

It is obvious that the highest processing time corresponds to users from Europe and North America, which have the largest populations, and the same pattern appears in the hourly loading figure. The other important factor for the data center is cost; detailed results can be found in the appendix. All results also improved compared to scenario 0, owing to virtualization.

In the next scenario we allocate one more data center, located in region 0 (North America), to process requests. In order to keep cost the same we split the 60 VMs equally between the data centers, 30 VMs each. The VM policy is round robin and the data center policy is closest data center. The response time figures show an increase for most user bases, except user bases 1 and 6. For these user bases, the closer data center in North America that now serves them has had more effect than the reduced number of virtual machines in that data center, so their response times decrease because they receive service from a closer data center.

So decentralization is helpful in this case. The overall average response time has nevertheless increased, and the average processing time has more than doubled. Data transfer cost per data center is reduced, owing to the decentralization of the data centers, their closeness to the user bases and the smaller population allocated to each data center.

The VM cost remains the same, as we divided the VMs equally. From the hourly average processing time figures it can be seen that the middle peak is missing, having shifted into the DC2 figure. The reason is that, when we place another data center in North America, our default configuration, which serves users from the closest data center, means that user bases 1, 2 and 6, from North America, South America and Australia, send their requests to DC2.

The closest data center is defined on the basis of the network latency configured in the internet characteristics, and the same reasoning explains the hourly loading of the data centers. As shown in the energy efficiency table for each data center, since we decrease the number of VMs per data center (the consolidation ratio), the IT infrastructure power and the server power each increase by one kW, to 17 kW and 5 kW respectively, which is reflected slightly in the total power of the data center and the annual utility bill.
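For reference, PUE, reported alongside these power figures, relates the facility's total power draw to the power consumed by the IT equipment alone (this is the standard definition, not a value from the scenario):

```latex
\mathrm{PUE} = \frac{P_{\mathrm{total\ facility}}}{P_{\mathrm{IT\ equipment}}}
```

A lower PUE therefore means less overhead spent on cooling and power distribution per unit of useful IT power.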

The PUE, however, shows a slightly better value. There are also alternative versions of the second scenario. In this alternative we simply increase the number of VMs in each data center from 30 to 60, keeping the closest data center policy and the round robin VM policy. Here the individual and overall average processing times fall by more than half, since each data center now has 60 VMs.

Data transfer cost per data center is reduced, though unevenly, owing to the decentralization of the data centers and their closeness to user bases with different population sizes. The overall average response time decreased; in particular, the response times for user bases 1, 2 and 6 dropped dramatically compared to the first scenario, as they receive service from the closer data center in North America, whereas the response times of the other user bases remain almost unchanged.

The hourly processing and loading patterns of the data centers have decreased in height, as there are more VMs to process the requests, while the total cost of the data centers increases. As shown in the energy efficiency table for each data center, since we increase the number of VMs (the consolidation ratio), the IT infrastructure power and the server power each decrease by one kW, to 16 kW and 4 kW respectively, which is reflected slightly in the total power of the data center and the annual utility bill.

The PUE, however, shows a slightly worse value. Detailed results of this scenario are given below. In the following scenario we use two data centers, each with 60 VMs, sharing load during peak hours. The VM policy is round robin. In case of heavy load on one data center, the optimized-response-time service broker policy lets us share the load with the lighter data center. As can be seen, the loading patterns of the data centers differ a little from the previous case, because the traffic is spread between the two data centers.

The overall average response time increases a little, since extra load moves from the overloaded data center to the lighter one and therefore takes more time to be served, while the total cost remains the same. As demonstrated in the appendix, the individual and overall average processing times drop by almost half compared to the previous scenario once throttling is added, response time falls sharply, and cost again remains the same. Adding another data center reduces the overall average response times of the data centers a little. The greatest fall in average response time occurs for user base 4, located in Asia: since we add another data center in Asia (DC3), user base 4 is now served from DC3, because that data center is closer to it.

Therefore the data transfer cost of DC1 is reduced, as it transmits less once user base 4 is removed from its load. The loading and processing patterns of the data centers vary, because the data centers are not equally utilized relative to their request demands, especially DC3, which faces fewer requests due to a smaller population. The overall processing time is reduced, as three data centers now contribute to the processing; the total cost is reported with the detailed results. In order to bring the processing times of the data centers closer to each other and improve the QoS each data center offers to its regions, so that they experience fair response times, we need to modify our configuration.

Seems, popular dissertation abstract ghostwriters site for university yet

In this case we use resources more efficiently than before. Eliminating Application Incompatibility Issue: It provides the suitable environment on a single machine to run different applications and operating systems regardless of their need to run on individual machines without affecting each other. Rapid Return on Investment: After investing virtualization, you would be paid back in less than a six month.

Enhancing Business Continuity: It is based on the fact that every virtual machine is isolated so that in case of errors or crashes it would not influence other virtual machines. In addition, we can make an image of virtual machine in specified times in order to restore that to the point we need in case of failure. Therefore it helps resuming the operation as soon as possible. Enabling Dynamic Provisioning: It is about sharing a physical storage among different virtual machines.

Adding or removing storage corresponds to needs. This feature help underutilized storages to consolidate. In addition it prevents overloading the storages. Enhancing Security: There are two advantages of virtualization regarding security. First as each user is only has interaction with its dedicated virtual machine. Thus he has no access to the other ones.

Second if a virtual machine is affected by virus or other network attacks it would be mitigated inside that VM avoiding spreading out to other VMs. The last benefit is Agility that fulfilled in these ways: Providing a Logical IT Infrastructure: It is the way we look at the IT infrastructure such as hardware, software, storage, networks, and data as different layers.

The consequence obtained is simplification of provisioning, troubleshooting and management of systems. From another perspective consolidating servers facilitate the monitoring and management on a single server. Facilitating Self-managing Dynamic Systems: The more agile the IT infrastructure is the more dynamically it processes and respond.

Portability: Because of Storing virtual machine data on files inside physical server, data can be easily transferred to another one. VM Migration [10] As one of the best features and capabilities of virtualization, VM migration ensures the availability of applications. In case of maintenance or troubleshooting, Systems need to be shut down for a while.

By VM migration we can move the virtual machine running several operating systems and applications to another physical server without interrupting the application operation. To increase the uptime of application it is rather to use live migration that occurs while VM is running on server and continue the operation on the target system.

VM migration must happen in the following conditions: [42] 1. Resource utilization reaches the approximately percent that is risky to Service level agreement. Resource is underutilized therefore VM should migrate to another server and the inefficient server goes down. VM has a massive communication with VM working on the other server. The VM temperature overpasses the threshold because of workload so that it should migrate to another server to let overheated server to chill.

Here are some advantages that VM migration brings to Cloud computing: Elasticity and Scalability: Turning on and off the VMs requires less actions compared to the same on the physical servers. Workload Migration: With the use of live migration moving workload to other servers is much easier than across physical servers. Resiliency: This technique provides a safe shield for user services against the server failures. On server side, client accesses application or desktop on a remote server.

On client side virtualization applications would be handed over to client on demand from remote server. Desktop virtualization requires a certain QOS to meet the client demands and can be fulfilled by means of WAN optimization.

One Advantage of virtual appliance is caused by the fact that having software based appliance is more cost effective than hardware based one with same functionality. Furthermore software appliance provides more availability without needing to buy more hardware. Another advantage is that it leverages the networking actions in case of VMs migration. There must be a dynamic synergy between virtual server and appliance.

Server Virtualization Virtual server allows consolidation in such a way that several VMs share the same physical server to run instead of having their own server that leads to less cost in terms of hardware, management for site infrastructure facilities and space.

Provision of VMs address the clients demands for additional resources urgently while VMs migration corresponds to guaranteeing the availability of services. Application can only interact with its own operating system and applications running upon that.

This trend is so useful for web hosting as we offer the same services to users via different operating systems on a single machine like Facebook. The pitfall of this type is that as single operating system is used so it cannot run different types of applications due to their various demands in terms of patch level or version of operating system.

Hardware Emulation: This type of server virtualization means applications run on multiple operating systems as each of those are completely separated per virtual machine monitor VMM. VMM accommodates on the virtualization hypervisor. Operating systems can be from different vendors or having different versions or other characteristics. This approach concerns the virtualization software like hypervisor that emulates hardware to run operating systems on that hardware.

This is called virtual machine monitor or VMM. The pros of this method indicates that different operating systems run parallel so it is beneficial for test and deploying software in distinct environments. Applications running on the virtual system are slower than on the physical system. Furthermore, another drawback referred to device drivers that hypervisor must contain to access the resources. If user wants to have a new device driver it would not be possible, therefore hypervisor which has no driver for that device cannot run upon the machine.

Companies offering this kind of virtualization are Microsoft with the hyper-v component which is part of windows server and VMware including two forms: VMware server and ESX server. XEN is the other technology performing hardware emulation which is open source. Another well-known platform for hardware virtualization is kernel based virtual machine KVM , which is a Linux operating system solution [43]. Para-virtualization: This concept is the same as hardware emulation but the difference is that it would allow only one operating system to access physical resources at a moment.

It has two advantages. First is that it put less overhead on performance compared to whole hardware emulation. The second one is to take benefits of capabilities of physical hardware in the server instead of using drivers which exist inside virtualization software. The important disadvantage of this approach is that the operating systems acquire to be modified in order to access the underlying resources thus they should be open source like Linux. Xensource Company offers Xen as a pioneer example of Para-virtualization.

In the experiment part we use Xen as server virtualization technology to reach energy efficiency goal. Figure 9 shows the virtualization architecture as different layers: the lowest layer is physical resources, second layer from down is VMM while next one operating system. The highest layer is application. Another problem as applications are growing to be more and more internet based, multiple users would access the same data at a same time so it would cause some traffic stuck in the server.

For these reasons date should be virtualized to avoid access dilemmas and improve the data management along with reducing cost. There is an important issue dealing with virtualization, Although it is a software technology, while eliminating some servers and migrating operating systems to virtual machines would make those target servers much more significant as each of them is hosting many virtual machines corresponding to a big population therefore losing one of them would be an enormous problem that must be taken into consideration.

First of all, we need to have a comparison between traditional data centre and Cloud data centre to clarify whether it is beneficial to transit computing to a Cloud environment or not [55,56]. It can be even one application running in data center like Facebook.

On the other hand Cloud data center is operational for single workloads. But when a workload becomes optimized Cloud data center acts more efficiently and economically. As opposed in Cloud data center service provider takes care of maintenance.

In contrast Cloud data center uses homogenous hardware environment. It means it aggregates homogeneous resources to make them available to customers quicker than traditional data center can. In contradictory Cloud data centers is shared across multiple enterprises. Adversely Cloud data center is almost using standardized infrastructure. It is quite clear that moving to the Cloud data centers is more cost effective and utilized than traditional one.

It is simpler to organize and operate Cloud data centers. They are scalable as you gain lower cost per user if you expand Cloud data center. There are many approaches regarding energy efficiency of Cloud datacentre at different levels such as hardware, operating system and data centre. Data centre level is the most important one and as a matter of fact addressing challenges about energy efficiency in this level sounds to be more complicated and comprehensive covering the other levels.

It uses DVFS and the goal is minimizing power consumption and keeping good performance. ECOsystem The system calculates the required power. Then distribute that to applications according to their priorities. Afterwards application consumes the power by resource utilization and throttling method. The goal is reaching to battery lifetime on mobile systems.

Nemesis OS In case of excess of threshold by applications in regard to their energy consumption, they must set their operation according to received signal from OS. Resource throttling is used here and the goal is getting to battery lifetime on mobile systems. GRACE Global, per application and internal are three levels of adaption which are coordinated to ensure effectiveness. Resource throttling and DVFS are used here and the goal is minimizing power and having acceptable performance. DVFS is used as technique and the goal is minimizing power consumption and meeting good performance.

Coda and Odyssey Coda signal application adaption via distributing a file to them whereas Odyssey does it allowing regulating the resource. Resource throttling is used here and the goal is minimizing energy consumption and application data degradation allowance.

PowerNap It Leverages short sleep modes to utilize resources by using dynamically deactivating components of system. The goal is minimizing power consumption and satisfying performance. Having presented the operating system approaches for energy efficiency, now we introduce power management approaches at data centre level comprising two parts: projects that are based on virtualized data centre and those at which non virtualized data centres use them [44].

At last a Green Cloud architecture [8, 32] that seemed to be the most appropriate approach in terms of all aspects of sustainability would be discussed in details. Furthermore the reasons of why this approach surpasses the other ones would be explained. Cloud Balancing [35] It refers to a concept based on the global application delivery technology which can optimize performance, availability and cost control.

Cloud balancing route, redirect, split requests across multiple application controllers located in the Cloud data centers belonging to different Cloud providers. It can make decision based on myriad criteria such as user location, specific SLA, data center capacity, application response time, daytime, cost of application delivery and execution associated to each request and so on.

Therefore depending on efficient criteria any Cloud provider can serve the request. Cloud balancing is the evolution of global server load balancing with enhanced features transitioning from typical routing to content aware distribution across Cloud environment.

Decisions are made by context aware global application delivery infrastructure. This goal is fulfilled through collaboration of local load balancing and global application delivery across data centers. Cloud balancing can be used in both virtualized and non-virtualized Cloud data centers.

In the experiment part we use Cloud balancing as an energy efficient technique. Server power switching technique is used. The goal is minimizing power consumption and performance degradation. Managing Energy and Server Resources in Hosting Centres It concerns economy as system regulates the cost of a resource and usefulness of assigning that to a service to maximize profit. Recourses are marked by services from volume and quality point of view.

There would be some servers elected to handle the service. Workload consolidation and server power switching is used as energy efficiency technique. Energy Efficient Server Clusters The system predicts the amount of frequency needed to meet the reasonable response time. Afterwards it sub divides that to minor frequencies allocated to the number of nodes.

Here a threshold determines when to turn on or off the nodes. Dynamic voltage frequency scaling and server power switching is used for this project. The goal is minimizing power consumption and meeting performance. If a request cannot serve, using the same heuristic another server powers on to handle the allocated requests. This Project uses server power switching and consolidation.

Hint: bin packing is a kind of problem in which certain number of bins of capacity contains objects of various volumes. Besides it can define the power requirement for functions optimally. DVFS is used and the goal is to allocate all power budgets to lower the average response time. Environment-Conscious Scheduling of HPC Applications It corresponds to scheduling high performance computing applications by using five heuristics across Cloud data centers located in different geographically areas.

DVFS and leveraging of data centers distribution are meant to be power saving technique. The objectives of this project are minimizing CO2 emission and energy usage along maximizing revenue. The goal of this approach is to minimize power usage while having expected performance. Hint: soft scaling is hardware scaling emulation that uses the VMM scheduling ability to make VM have less time to utilize the resource.

Coordinated Multi-level Power Management for the Data Center It is based on different power management trends treating dynamically across nodes to allocate power according to power budget. The goal is to meet power budget while considering performance and minimizing energy consumption. Power and Performance Management of Virtualized Computing Environments via Limited Look ahead Control Every application status is preserved and predicted via simulation learning basic called limited look ahead control LLC using kalman filter.

Resource Allocation using Virtual Clusters This research uses bin packing method to arrange request requirement for resources from most demanding to the least one. Resource throttling is its required technique in this project. Performance satisfaction and maximizing resource utilization are the goals.

Hint: resource throttling is the regulation of resources by means of algorithms. A global-level scheduler controls the flow of resources to applications and uses resource throttling as its energy saving method; maximizing resource utilization and providing acceptable performance are its goals.

Shares and Utilities based Power Consolidation in Virtualized Server Environments: the quantity of resources allocated to a VM is specified explicitly, and the hypervisor distributes resources among VMs using a share-based technique. DVFS and soft scaling are the energy-efficient mechanisms, and the goal is to minimize power consumption while meeting performance targets.

A migration manager is in charge of live migration of VMs, while a power manager is responsible for DVFS and power state adjustments; in addition, an arbitrator decides on the migration and relocation of VMs. Resource Pool Management: Reactive Versus Proactive: a workload placement controller performs proactive global optimization, whereas reactive adaptation is handled by a migration controller.

VM consolidation and server power switching are the methods used, with the aim of minimizing power consumption while satisfying performance. DVFS and exploiting the heterogeneity of Cloud data centers are further beneficial techniques here. This approach consolidates resource requests by negotiating with users and making green offers to them, so that idle servers can be turned off. Resources are monitored by energy sensors to support efficient resource allocation.

Server power switching and VM consolidation are used here as well, with the goal of minimizing power consumption without degrading performance. It is expected that virtualized data center approaches perform better than non-virtualized projects, since virtualization itself is one of the most effective energy efficiency factors: it enables server consolidation, resource provisioning and VM migration, in addition to server power switching and DVFS, to minimize power while satisfying performance.

The last but foremost project is the Green Cloud architecture [8, 32], described as follows. Although minimizing energy usage is generally beneficial, it does not guarantee a reduction in carbon emissions. For example, using a cheaper source of energy such as coal pollutes the environment more. Therefore a green, carbon-aware architecture is proposed to reduce the carbon footprint of Cloud computing while preserving quality of service, such as response time.

This research contributes two parts: a CO2-aware architecture, and carbon- and cost-efficient policies, such as CEGP, for scheduling application workloads across data centers with lower CO2 emissions and higher energy efficiency while respecting deadlines. The architecture is energy efficient from both the user and the Cloud provider perspective. Earlier approaches, such as the GreenCloud architecture [33, 47], which aims to lower the energy consumption of virtualized data centers through VM migration and placement, or the Green Open Cloud (GOC) solution proposed by [45, 46], designed for future data centers supporting advance reservation and workload consolidation, are energy efficient but do not necessarily reduce CO2 emissions.

This means that energy efficiency is not always equivalent to carbon efficiency. Another approach, proposed by [34], considers minimizing carbon emissions and maximizing profit simultaneously for non-virtualized data centers. Here we discuss a framework that minimizes the carbon footprint as a whole, considering all service models, along with a carbon-aware policy for IaaS providers.

Figure [10] shows the green Cloud architecture, which supports environmental sustainability from both the user and the provider point of view. It includes the following elements: 1. Third party: a green offer directory listing green Cloud services, and a carbon emission directory listing energy efficiency information for those services. 2. User: a green broker accepts Cloud requests (SaaS, IaaS or PaaS) and selects the greenest Cloud provider.

3. Provider: green middleware which minimizes the carbon emissions generated by operations inside the Cloud. These components may differ depending on the service model. Information regarding the energy efficiency of services, consisting of power usage effectiveness (PUE), cooling efficiency of data centers, network cost and emissions, is kept in the carbon emission directory.
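PUE is defined as the total facility power divided by the power drawn by the IT equipment alone, so values closer to 1 are better. The short Java example below works through the calculation with purely hypothetical power and tariff figures; it is only meant to make the later scenario discussions of PUE and annual utility bills easier to follow.

```java
/** Worked example of power usage effectiveness (PUE); all figures are hypothetical. */
public class PueExample {

    /** PUE = total facility power / IT equipment power (always >= 1.0, lower is better). */
    static double pue(double totalFacilityKw, double itEquipmentKw) {
        return totalFacilityKw / itEquipmentKw;
    }

    public static void main(String[] args) {
        double itPowerKw = 20.0;           // servers, storage, network gear (assumed)
        double coolingAndOverheadKw = 8.0; // cooling, lighting, distribution losses (assumed)
        double totalKw = itPowerKw + coolingAndOverheadKw;

        double pueValue = pue(totalKw, itPowerKw);        // 28 / 20 = 1.4
        double hoursPerYear = 24 * 365;
        double pricePerKwh = 0.10;                        // assumed electricity tariff in $/kWh
        double annualBill = totalKw * hoursPerYear * pricePerKwh;

        System.out.printf("PUE = %.2f, annual utility bill ~ $%.0f%n", pueValue, annualBill);
    }
}
```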

The green broker can therefore obtain the energy factors of services from this directory. Like a typical Cloud broker, the green broker is responsible for delivering Cloud services to users and for scheduling the applications leased to them. The first layer of the green broker handles Cloud service requests, which need to be analyzed and their QoS requirements specified.

In the second layer, information retrieved from the carbon emission directory and the green offer directory, concerning the emissions of the requested services and the available green offers, is used to calculate the cost and carbon footprint of leasing the services. Green policies then decide how services are delivered based on these calculations. Alternative green offers may be proposed to users when no existing offer matches the request.

The type of service the user requires determines the carbon footprint, which is the sum of the CO2 emissions generated for transmission and for service processing at the data center. In order to achieve fully carbon-aware Cloud computing, green middleware deployment must be considered within the different service models as follows. SaaS: SaaS providers mostly serve their own software or lease infrastructure from IaaS providers.

It is therefore useful to have power capping technology to restrict this kind of service usage, which is reasonable in situations where users are indifferent to environmental sustainability, such as social networking and gaming applications. SaaS providers can also operate green data centers offering green software services.

Hint: power capping refers to limiting the power used by a server. At the platform (PaaS) level, energy-efficient components such as a green compiler and CO2 emission measurement tools, which allow users to monitor how green their applications are, should also be considered. IaaS: this service supports the other service models (SaaS and PaaS) in addition to its primary task of providing infrastructure as a service to Cloud users; IaaS therefore plays a significant role in the green Cloud architecture.
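A minimal sketch of the power capping idea, assuming a hypothetical server whose power draw grows with the selected frequency step, is given below; the frequency/power table and the cap are invented, and real implementations would act on firmware or hypervisor interfaces rather than a lookup table.

```java
/** Illustrative power-capping rule: pick the highest frequency whose power fits under the cap. */
public class PowerCapSketch {

    // Hypothetical server model: estimated power at each available frequency step.
    static final double[] FREQUENCY_GHZ = {1.2, 1.8, 2.4, 3.0};
    static final double[] POWER_AT_FREQ_W = {90, 130, 180, 240};

    /** Return the highest frequency step whose estimated power stays under the cap. */
    static int applyCap(double powerCapWatts) {
        for (int step = FREQUENCY_GHZ.length - 1; step >= 0; step--) {
            if (POWER_AT_FREQ_W[step] <= powerCapWatts) {
                return step;
            }
        }
        return 0; // even the lowest step exceeds the cap; run at minimum frequency
    }

    public static void main(String[] args) {
        double capWatts = 200;
        int step = applyCap(capWatts);
        System.out.printf("Cap %.0f W -> run at %.1f GHz (~%.0f W)%n",
                capWatts, FREQUENCY_GHZ[step], POWER_AT_FREQ_W[step]);
    }
}
```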

Cloud providers use up-to-date technologies such as virtualization, with features like consolidation, live migration and performance isolation, to make their data center infrastructure as energy efficient as possible. Environmental sensors calculate the energy efficiency of data centers, and this information is maintained in the carbon emission directory. Green resource provisioning and scheduling policies help reduce energy consumption. Furthermore, Cloud providers can use green offers to motivate users to consume services in off-peak hours, when data centers operate at a higher rate of energy efficiency.

Scheduling Policies [8, 34, 48]: sustainability covers two aspects, minimizing carbon footprint and cost, and maximizing profit. At each period the policy finds the data center with the least carbon footprint, and VMs there execute the job. The green broker obtains the Cloud provider with the lowest carbon emissions from the carbon emission directory. The QoS of a job is specified through attributes such as the number of CPUs needed to run it, its deadline, and its execution time at a given frequency.

CEGP sorts the jobs according to their deadlines and then sorts the data centers by carbon footprint. It allocates jobs to data centers in a greedy way to decrease carbon emissions; afterwards, jobs are dispatched to VMs according to this ordering.
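The following Java sketch reproduces only the ordering logic just described (jobs by earliest deadline, data centers by lowest carbon footprint, greedy assignment); the record types, the capacity model and the sample values are assumptions for illustration and are not taken from the CEGP implementation.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

/** Illustrative sketch of a CEGP-style greedy, carbon-aware scheduler. */
public class CegpSketch {

    record Job(String id, double deadlineHours, int cpusNeeded) {}
    record DataCenter(String name, double carbonPerCpuHour, int freeCpus) {}

    /** Jobs sorted by earliest deadline, data centers by lowest carbon footprint. */
    static List<String> schedule(List<Job> jobs, List<DataCenter> sites) {
        List<Job> byDeadline = new ArrayList<>(jobs);
        byDeadline.sort(Comparator.comparingDouble(Job::deadlineHours));

        List<DataCenter> byCarbon = new ArrayList<>(sites);
        byCarbon.sort(Comparator.comparingDouble(DataCenter::carbonPerCpuHour));

        int[] free = byCarbon.stream().mapToInt(DataCenter::freeCpus).toArray();
        List<String> plan = new ArrayList<>();
        for (Job job : byDeadline) {
            for (int i = 0; i < byCarbon.size(); i++) {
                if (free[i] >= job.cpusNeeded()) {   // greedily pick the greenest site that fits
                    free[i] -= job.cpusNeeded();
                    plan.add(job.id() + " -> " + byCarbon.get(i).name());
                    break;
                }
            }
            // jobs that fit nowhere are simply skipped in this sketch
        }
        return plan;
    }

    public static void main(String[] args) {
        List<Job> jobs = List.of(new Job("j1", 4, 8), new Job("j2", 2, 4), new Job("j3", 6, 16));
        List<DataCenter> sites = List.of(
                new DataCenter("dc-hydro", 0.02, 16),
                new DataCenter("dc-coal", 0.09, 64));
        schedule(jobs, sites).forEach(System.out::println);
    }
}
```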

There are other policies used especially in the non-virtualized version of the architecture. Minimizing CO2 - Greedy Minimum Carbon Emission (GMCE): applications are sorted earliest deadline first, while the data centers located in different regions are ordered by their carbon efficiency, and the scheduler dispatches applications to data centers based on this ordering. A corresponding profit-oriented policy instead maps applications to the most profitable Cloud site according to its own ordering.

It is worth noting that this green Cloud architecture applies DVFS before the scheduling technique decides how to allocate applications to Cloud sites. Green policies therefore appear more reasonable as scheduling policies than profit-oriented ones. To provide a deeper and more tangible understanding of the Cloud computing environment, in the implementation section we model and analyse the well-known, large-scale, globally used Cloud-based application Facebook under various deployment models, studying its behaviour with respect to energy, cost and performance.

We first give a brief introduction to the simulation software, followed by its characteristics and capabilities. Several issues affect these three goals, such as the distribution of user bases, the internet characteristics of different geographic regions, and the policies the Cloud uses to allocate resources to users within and across data centres using virtual machines.

Studying the behaviour of a Cloud environment in the real world is extremely complicated, so the best solution is to study it virtually through simulation. There are several toolkits for modelling and simulating Cloud environments, such as CloudSim, GridSim and SimJava, but only CloudAnalyst offers a visual interface that frees the experimenter from programming effort.

Parameters can be modified quickly and easily, leading to better performance and precision. CloudAnalyst is based on the CloudSim toolkit and developed on the Java platform. High Level of Flexibility and Configurability for a Simulation: internet applications are complicated and require many parameters to be defined, so it is helpful to initialize and modify them and repeat the simulations to obtain more accurate results.

Visual Output: a picture is worth many words, so the statistics produced by the simulation are preferably presented as tables and charts. Repetition Ability: to be trustworthy, simulation software must produce the same output when a run is repeated with the same input under the same conditions.

It is also useful to be able to save inputs and outputs to a file. Ease of Extension: the tool can be extended to meet the future requirements and parameters of complex internet applications. CloudAnalyst consists of the following components and responsibilities. GUI Package: the graphical user interface offered to users makes it easy to configure and repeat simulations accurately and efficiently.

User Base: this concept represents a group of users acting as a single entity that generates traffic for the simulation. Internet: this component models the internet and traffic routing at a smaller scale, taking into account the delay and bandwidth between regions. Region: CloudAnalyst divides the world into six regions, corresponding to the continents, to simulate real-world application behaviour. Internet Cloudlet: a group of internet requests treated as a single unit, carrying information such as input and output file sizes, request size, the source and destination IDs used for routing via the internet, and the number of requests.

Datacentre Controller: this is the most important component of CloudAnalyst. Each datacentre has a controller that manages VMs, load-balances traffic, and routes requests to them via the internet. Active monitoring load balancer: this policy tries to allocate an equal number of requests to each VM, keeping them in a similar processing condition.

Throttled load balancer: this policy allocates only a limited number of Cloudlets to a VM at a time. Cloud Application Service Broker: this component decides which datacentre should serve the requests from each user base. Two practical routing policies control traffic through the broker. Service proximity based routing: the service broker routes traffic to the datacentre closest to the user base in terms of latency.

Performance optimizing routing: in this policy the service broker observes the performance of all data centres and selects the one with the best response time to handle the requests. This policy also shifts load from a heavily loaded data centre, as occurs at peak hours, to less loaded ones.
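A minimal Java sketch of the two broker policies just described follows; it is not CloudAnalyst source code, and the record, field names and sample statistics are assumptions used only to contrast the two selection rules.

```java
import java.util.Comparator;
import java.util.List;

/** Illustrative sketch of the two service broker routing policies described above. */
public class ServiceBrokerSketch {

    /** Hypothetical per-data-centre view maintained by the broker. */
    record DataCenterStats(String name, double latencyFromUserMs, double recentAvgResponseMs) {}

    /** Service proximity based routing: lowest network latency wins. */
    static DataCenterStats closestDataCenter(List<DataCenterStats> dcs) {
        return dcs.stream()
                .min(Comparator.comparingDouble(DataCenterStats::latencyFromUserMs))
                .orElseThrow();
    }

    /** Performance optimizing routing: lowest observed response time wins. */
    static DataCenterStats bestResponseTime(List<DataCenterStats> dcs) {
        return dcs.stream()
                .min(Comparator.comparingDouble(DataCenterStats::recentAvgResponseMs))
                .orElseThrow();
    }

    public static void main(String[] args) {
        List<DataCenterStats> dcs = List.of(
                new DataCenterStats("DC1-Europe", 25, 320),        // close but heavily loaded
                new DataCenterStats("DC2-NorthAmerica", 110, 80)); // farther but lightly loaded
        System.out.println("Proximity policy picks:   " + closestDataCenter(dcs).name());
        System.out.println("Performance policy picks: " + bestResponseTime(dcs).name());
    }
}
```

The example deliberately shows the case where the two policies disagree: a nearby but overloaded data centre is chosen by the proximity rule, while the performance rule diverts traffic to the less loaded site, which is the load-sharing effect exploited in the later scenarios.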

Related Work: a previous study of Facebook [49, 50] assessed the cost and performance aspects of datacentres in terms of VM and data transfer costs and of response and processing times. Here we have expanded the scenarios with a different population distribution, based on the latest worldwide Facebook statistics, which indicate that Europe now holds the most users among all continents, whereas in the previous work it was North America. Furthermore, in our experiments the overall population has grown to around eight hundred million users, compared to roughly two hundred million in the former work.

Regarding energy efficiency, we compared the data centre before and after virtualization, and we investigated the combined effect of virtualization on the IT and site infrastructure. Aim of Simulation: simulating a large-scale application like Facebook is a demanding task because so many parameters need to be considered.

The primary goal of our simulation is to study how Facebook behaves in a Cloud environment. The secondary goal is to incorporate the energy-efficient aspects of Cloud computing into the simulation, including virtualization, cost reduction and improvement of quality of service (response time). We focus on virtualization as the most important factor for an energy-efficient deployment, in the form of paravirtualization based on Xen technology.

Although the main part of this work advocates the Green Cloud architecture and its carbon-footprint perspective, the facilities and equipment available to us made it infeasible to simulate that approach, as it requires a very complex infrastructure. We therefore study an energy-efficient and sustainable Cloud-based application using the only software we are aware of that is capable of simulating a Cloud environment.

The Service Broker holds a list of DataCenterControllers tagged by region and chooses which DataCenterController serves a request based on the policies embedded in it, as described above. Simulation Configuration [50]: our simulation assumptions are described in the tables below. We assume that each user issues a new request every 5 minutes; the data size per request in bytes is defined; the simulation duration is set to one day; and the cost plan is based on the best-known Cloud provider, Amazon EC2 [52].

We implement different scenarios to study the behaviour of Facebook under various configurations. The transfer time of a request, T_transfer, equals the size of a unit of data divided by the bandwidth dedicated to each user, i.e. T_transfer = D / (BW_total / Nr), where BW_total is the available bandwidth defined in the internet characteristics and Nr is the number of user requests currently in transmission.

The internet characteristics model the transmission of user requests between different regions. The network latency, T_latency, equals half of the round-trip ping time, and the overall network delay of a request combines T_latency and T_transfer. Our main approach in the scenarios is virtualization, used to achieve energy efficiency. All figures can be found in the appendix (chapter 9); in the experiment part we only condense the results into tables. We first assess the energy efficiency of a single datacentre placed in Europe prior to applying virtualization.
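A small worked example of this delay model is given below. All numbers (request size, bandwidth, concurrent requests, ping time) are hypothetical, and the byte-to-bit conversion is our own assumption about units, so the figures only illustrate how the two terms combine rather than reproduce the simulator's exact output.

```java
/** Worked example of the delay model used in the simulation (hypothetical numbers). */
public class RequestDelayExample {

    /** T_transfer = D / (BW_total / Nr): each in-flight request gets an equal share of bandwidth. */
    static double transferTimeSec(double requestBytes, double totalBandwidthBps, int requestsInTransit) {
        double perUserBandwidth = totalBandwidthBps / requestsInTransit;
        return requestBytes * 8 / perUserBandwidth;   // bytes -> bits before dividing (assumption)
    }

    /** T_latency is taken as half of the round-trip ping time between the two regions. */
    static double latencySec(double roundTripPingMs) {
        return roundTripPingMs / 2.0 / 1000.0;
    }

    public static void main(String[] args) {
        double requestBytes = 100;             // data size per request (assumed)
        double totalBandwidthBps = 10_000_000; // 10 Mbit/s available between the regions (assumed)
        int requestsInTransit = 50;            // Nr: concurrent requests on the link (assumed)
        double pingMs = 100;                   // round-trip time between the regions (assumed)

        double delay = latencySec(pingMs)
                     + transferTimeSec(requestBytes, totalBandwidthBps, requestsInTransit);
        System.out.printf("Estimated network delay per request: %.4f s%n", delay);
    }
}
```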

In this scenario all Facebook requests coming from all continents are processed by a single data center placed in region 2 (Europe), using 60 VMs on 20 physical servers, with the default round robin policy for VM load balancing and the closest data center policy for the service broker. The response times of the different user bases show that during peak hours one user base can influence the others.

For instance, while user base 1 is generating a large number of requests, the other regions have comparatively little service demand. As can be seen from the response time figures, the values fluctuate according to the peak load hours of each user base, yielding the 24-hour overall average response time and data center processing time reported in the results. The figures in the appendix illustrate the average time taken by the data center to serve a request from the different regions over 24 hours, as well as the number of requests served per hour.

It is clear that the highest processing time corresponds to users from Europe and North America, which have the largest populations. The same pattern appears in the hourly loading figure. The other important factor for the data center is cost; detailed results can be found in the appendix.

All results also improved compared to scenario 0, owing to virtualization. In this scenario we allocate one more data center, located in region 0 (North America), to process requests. To keep cost the same we split the 60 VMs equally between the data centers, 30 VMs each. The VM policy is round robin and the data center policy is closest data center. The response time figures show an increase for most user bases except user bases 1 and 6. For these user bases, the closer data center in North America that now serves them has had more effect than the reduced number of virtual machines in that data center, so their response times decrease because they receive service from a closer data center.

Decentralization is therefore helpful in this case. The overall average response time has nevertheless increased, and the average processing time has more than doubled. Data transfer cost per data center is reduced, because decentralization brings the data centers closer to the user bases and allocates a smaller population to each one. The VM cost remains the same, since the VMs are divided equally. From the hourly average processing time figures it can be seen that the middle peak is missing; it has shifted into the DC2 figure.

The reason is that once another data center is located in North America, the default configuration, in which requests are served by the data center closest to the users, means that user bases 1, 2 and 6 from North America, South America and Australia send their requests to DC2. The closest data center is determined by the network latency configured in the internet characteristics. The same reasoning applies to the hourly loading of the data centers.

As shown in the energy efficiency table for each data center, decreasing the number of VMs (and thus the consolidation ratio) causes the IT infrastructure power and the server power to rise by one kW, to 17 and 5 respectively, which is reflected slightly in the total power of the data center and the annual utility bill. The PUE, however, improves. There are further variants of this second scenario. In the next variant we simply increase the number of VMs in each data center from 30 to 60; the data center policy remains closest data center and the VM policy is round robin.

In this scenario the individual and overall average processing times fall by more than half, since we increase the number of VMs to 60 in each data center. Data transfer cost per data center is reduced unevenly, because decentralization brings the data centers closer to user bases with different population sizes. The overall average response time decreases; in particular, the response times of user bases 1, 2 and 6 drop dramatically compared to the first scenario, as they now receive service from the closer data center in North America, whereas the response times of the other user bases remain almost unchanged.

The hourly processing and loading patterns of the data centers have decreased in height, as there are more VMs to process the requests. The total cost of the data centers increases. As shown in the energy efficiency table for each data center, increasing the number of VMs (and thus the consolidation ratio) causes the IT infrastructure power and the server power to fall by one kW, to 16 and 4 respectively, which is reflected slightly in the total power of the data center and the annual utility bill.

The PUE, however, shows a slightly worse value. Detailed results of this scenario are given below. In the next scenario we use two data centers with 60 VMs each and share load during peak hours. The VM policy is round robin; with the optimized response time service broker policy, a heavily loaded data center can share its load with the lighter one. As can be seen, the loading patterns of the data centers differ slightly from the previous case, because the traffic is spread between two data centers.

The overall average response time increases slightly, since the extra load moved from the overloaded data center to the lighter one takes more time to be served. Total cost remains unchanged. As demonstrated in the appendix, the individual and overall average processing times are almost halved compared to the previous scenario once we add throttling.

Response time falls sharply, while cost remains unchanged. Adding another data center reduces the overall average response time slightly. The greatest fall in average response time occurs for user base 4, located in Asia: since we add another data center in Asia (DC3), user base 4 is now served by DC3, the data center closest to it. The data transfer cost of DC1 therefore decreases, as it carries less traffic after losing user base 4.

The loading and processing patterns of the data centers vary because the data centers are not equally utilized by their request demands, especially DC3, which receives fewer requests due to the smaller population it serves. The overall processing time is reduced, as three data centers now contribute to processing. To bring the processing times of the data centers closer together and improve the QoS offered to each region, so that all regions experience a fair response time, we need to modify our configuration.
