
Category: OpenStack

There are 2 posts published under OpenStack.

Easily Get Your Open Source Framework Onto OpenStack

The number of open source frameworks available today is growing at an enormous pace, with over one million unique open source projects, as indicated in a recent survey by Black Duck.

 


 

The availability of these frameworks has changed the way we build products and structure our IT infrastructure. Products are often built through integration of different open source frameworks rather than developing the entire stack.

 

The speed of innovation has also changed as a result of this open source movement. Where innovation was previously driven mostly by how fast new features could be developed, in the open source world it is largely determined by how fast we can integrate and take advantage of new open source frameworks.

 

Having said that, the process of trying out new open source frameworks can still be fairly tedious, especially if there isn’t a well-funded company behind the project. Let me explain why.

 

Open Source Framework Integration Continuum

 

The integration of a new open source framework often goes through an evolution cycle that I refer to as the integration continuum.

 

[Diagram: the open source integration continuum]

 

A typical integration process often begins with initial exploration, where we continuously explore new frameworks, sometimes without any specific need in mind. At this stage, we want to get a feel for the framework, but don’t necessarily have the time to take a deep dive into the technology or deal with complex configurations. Once we find a framework that we like and see a potential use for it, we take a closer look and run a full POC.

 

In this case, we already have a better idea of what the product does and how it can fit into our system, but we want to validate our assumptions. Once we are done with the POC and have found what we were looking for, we start the integration and development stage with our product.

 

This stage is where we are often interested in getting our hands on the API and the product features, and need a simple environment that will allow us to quickly test the integration of that API with our product. As we get closer to production, we get more interested in the operational aspects and need to deal with building the cluster for high availability, adding more monitoring capabilities and integrating it with our existing operational environment.

 

Each of those steps often involves downloading the framework, setting up the environment and tuning it to fit the need of the specific stage (i.e. trial, POC, dev, production). For obvious reasons, the requirements for each of those stages are fairly different, so we typically end up with a different setup process with various tools in each step, leading to a process in which we are continuously ripping and replacing the setup from the previous step as we move from one stage to the next.

 

The Friction Multiplier Effect

 

The friction described above applies to the process for a single framework. In reality, however, during the exploration and POC stages we evaluate more than one framework in order to choose the one that best fits our needs.

 

In addition, we often integrate with more than one product or framework at a time, so the overhead described above is multiplied by the number of frameworks that we are evaluating and using. The overhead in the initial exploration and POC stages often ends up as pure waste: we typically choose one framework out of several, which means we spend a lot of time setting up frameworks that we will never use.

 

What About Amazon, GAE, or Azure Offerings?

 

These clouds limit their services to a particular set of pre-canned frameworks, and the infrastructure and tools that most clouds offer are not open source. This means you will not be able to run the same services in your own environment. It also introduces a higher degree of lock-in.

 

Flexibility is critical as the industry moves toward more advanced development and production scenarios, and an inflexible offering can become a dead end. If you get stuck with the out-of-the-box offering, there is no way to escape; your only way out is to rebuild the entire environment from scratch with a different set of tools.

 

A New Proposition

 

Looking at the existing alternatives, there is a need for a hassle-free preview model that lets a user seamlessly take a demo into production, with the flexibility to fully customize and adapt it along the way. With GigaSpaces’ Cloudify Application Catalog, we believe the game is changing. This new service, built on HP’s OpenStack-based Public Cloud, makes deploying new open source services and applications on OpenStack simpler than on other clouds. By taking an open source approach, you won’t hit a dead end as you advance through the process, and you avoid the risk of a complete rewrite or lock-in. At the same time, we offer a hassle-free, one-click experience by providing an “as-a-service” offering for deploying any open source framework of choice. Because the same underlying infrastructure and set of tools are used through all the stages, users can carry their experience and investment from one stage to the next and thus avoid a complete rewrite.

 

Final Words

 

In today’s world, innovation is key to allowing organizations to keep up with the competition.

 

The use of open source has enabled everyone to accelerate the speed of innovation; however, the process of exploring and trying open source frameworks is still fairly tedious, with much friction in the transition from initial exploration to full production. With the new Catalog service, many users and organizations can increase their speed of innovation dramatically by simplifying the process of exploring and integrating new open source frameworks.


10 Mistakes to Avoid in Planning for OpenStack Cloud

Build your own cloud? With free open source? Too good to be true?  Not quite. 

 

OpenStack provides you with the opportunity to make resources available to your people by creating your own cloud without paying huge software license fees — that’s true. But in the years we at Mirantis have been building and deploying production OpenStack cloud environments, we’ve seen a lot of magical thinking. If you watch out for these ten common mistakes, you’ll be moving in the right direction.

 

 

Mistake #10: It’s open source. Who needs a budget?

 

Very often we hear, “Why do we need a budget for this? We’ll just implement the code off the repo. There’s no license fee.”

 

That last bit is true. There’s no license fee to run OpenStack, but open source software doesn’t just appear out of nowhere, especially for a project as large and complex as OpenStack. Hundreds of people get paid to work very hard to improve the code, which changes constantly, so the latest version of one component requires you to bring in the latest version of everything else.

 

The problem here is that the latest code is always unstable, and critical bug fixes happen at the speed of the community, not yours. Sometimes you’re going to need to pay someone to fix your bugs in your timeframe and not the community’s. Thus, open source code is free at any given moment in time, but there’s always change over time, and time is money.

 

Mistake #9: I can do it by myself.

 

If your entire cloud is small enough to fit on your laptop, you might be able to do it yourself.  If you’re looking at a medium or large cloud, however, get some help.  Most people implement clouds for reasons that aren’t simple; you must understand what everyone else needs — not just you — in order to do this right. Explicitly document your use cases so that you can figure out whether you need a public, private, or hybrid cloud. Is your workload multitenant, long-running, short-running, dedicated, ephemeral, stable, bursty, or maybe even all of the above?

 

Maybe the solution to your problem isn’t even in the cloud. Look at your legacy applications. Do they belong in the cloud, or do they need to continue on your existing infrastructure?

 

None of these decisions can be made in a vacuum.

 

Mistake #8: Everyone understands the terminology.

 

You may believe that everyone understands the terminology, but it is critical to understand the whos, whats, whys, whens, wheres, and hows — collectively. Consider the following sentence we heard in a planning meeting:

 

We built a service to support the service, but when we had problems with the service level, we called services.

 

Really?  Take the time to understand what your constituents mean in precise terms, because there is no common understanding — even with common words. There’s a lot of chatter on the OpenStack forums about the actual meaning of the word “type.”

 

Mistake #7: Assuming legacy systems will go away (or be migrated).

 

There’s a reason COBOL programmers still have jobs. Legacy applications don’t just go away; that’s just the reality. Just last week, a hyper-enthusiastic system admin told us, “We’re just going to build a cloud and move everything over.” Maybe it’d work, but not quickly. Some legacy systems, such as certain data storage, transactional, financial, and insurance applications, are just not ready to move to the cloud, especially if their business rules have not been well documented.

 

Mistake #6: All you need is load balancing.

 

This particular fallacy comes from thinking of the cloud as a giant router that just shifts stateless traffic to wherever it can run fastest. Think about what workloads you’re moving to the cloud. Is it a dev-test environment? Can you ramp up or ramp down? Can you shut it down in an emergency? Do you need a single component or multiple components? In most cases, you can’t scale an application just by cloning its elements; not all constituent services can maintain consistency across replicas unless they’re architected for it from the get-go.
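To make the cloning problem concrete, here is a minimal Python sketch (all names are illustrative, not from any OpenStack API): a stateful counter service is naively “scaled” by putting two clones behind a round-robin load balancer, and the replies immediately diverge from what a single instance would return.

```python
# Illustrative sketch: naively cloning a stateful service behind a
# load balancer breaks consistency, because each replica keeps its
# own local state. Names are hypothetical.

class CounterService:
    """A stateful service: each instance keeps its own local count."""
    def __init__(self):
        self.count = 0

    def increment(self):
        self.count += 1
        return self.count

def round_robin(replicas, n_requests):
    """A 'load balancer' that just spreads requests across clones."""
    results = []
    for i in range(n_requests):
        results.append(replicas[i % len(replicas)].increment())
    return results

single = CounterService()
print([single.increment() for _ in range(4)])   # [1, 2, 3, 4]

clones = [CounterService(), CounterService()]
print(round_robin(clones, 4))                   # [1, 1, 2, 2] -- replicas disagree
```

A client that saw count 2 from one clone can see count 1 from the next; fixing that requires shared or replicated state designed in from the start, not cloning after the fact.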

 

Mistake #5: We don’t need to talk to the developers.

 

In an OpenStack cloud, applications can exercise far more control over the platform they run on than in a conventional environment, but with great control comes great responsibility. Operations and developers need to play nice with each other; you need developer-level expertise to both properly architect and operate the solution.

 

That’s not to say that admins and operators should just get out of the way. You’re building an IaaS cloud with OpenStack so developers can use the infrastructure more easily. You want to give the developers just enough choices to make them successful, so create a menu — don’t give them the run of the kitchen.

 

Mistake #4: Our staff has the skills.

 

We often hear, “Our staff has the skills. It’s just like Linux.” Sure, if your organization is endowed with source-code masters for IP networking, hypervisor resource management, storage redundancy and optimization, source-code management, security and encryption, driver optimization, distributed application architectures, and any number of other technologies involved in OpenStack, you may be right.  Chances are, though, you’re missing one or many of these skills, and your staff needs to know that.

 

Everyone can use Linux, but not everyone’s a kernel engineer. Ultimately you can become the source-code master who knows everything, but it’s not overnight.

 

Mistake #3: We can monetize later.

 

“Cloud introduces great efficiencies.  It’ll pay for itself.”  Go ahead and try to get that past the CFO.

 

More than likely, you’re going to need new hardware and it’s not going to be the light and cheap stuff. And smart people don’t work for nothing. You’re going to need to train the people who don’t know everything they need to know. Oh, and do you also have a vacant, water-cooled data center nearby?

 

You may also need a new business model. Your existing infrastructure was funded with a set of assumptions about utilization by different functions and business owners – assumptions that likely no longer hold true. Where are your users going to get the money to support the cloud?

 

Understand your users’ economics and you’ll understand the value of your cloud.

 

Mistake #2: The cloud fixes itself

 

A cloud is not auto-magical, but, with the right monitoring and maintenance, it can, indeed, fix itself — sometimes.  But you need to make sure you have the right monitoring and the right redundancy, especially for alerting when capacity thresholds are near. You might not know it’s broken until it doesn’t fix itself; then you’re going to get that 3:00 A.M. call, and you must be prepared.  Remember that engineer who knows everything?  She’s not always going to be available.
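As a rough illustration of the kind of threshold-based alerting meant here, the Python sketch below flags utilization before capacity runs out. The threshold value and function names are hypothetical, not part of any OpenStack monitoring API.

```python
# Hypothetical capacity-alert check: fire a warning when utilization
# crosses a threshold, so someone is paged *before* the 3:00 A.M. outage.

WARN_THRESHOLD = 0.80  # illustrative: alert at 80% utilization

def check_capacity(used, total):
    """Return an alert message when utilization nears capacity, else None."""
    utilization = used / total
    if utilization >= WARN_THRESHOLD:
        return f"ALERT: {utilization:.0%} of capacity in use"
    return None

# e.g. values polled periodically from hypervisor or storage stats:
print(check_capacity(used=410, total=500))  # ALERT: 82% of capacity in use
print(check_capacity(used=200, total=500))  # None
```

The point is not the arithmetic but the direction of the alert: it triggers on approaching capacity, not on failure, which is what lets the cloud “fix itself” while there is still headroom to act.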

 

Mistake #1: Failure is not an option

 

Finally, there’s the romantic old-school belief that failure is not an option. In fact, when it comes to cloud, failure isn’t just an option — it’s a core design principle.  Fail often and fail fast, so you can move quickly. Just make sure that your systems and applications are prepared for the moment when something goes down and they have to adjust.

 

“Automate all the things,” as the saying goes — especially the notification of failure.  This way, you can really enjoy this new technology as your systems keep going even when something doesn’t go quite according to plan. That’s when your organization gets the maximum benefit from the cloud.

 

Isn’t that why you wanted OpenStack in the first place?
