
Category: Open source

There are 12 posts published under Open source.

The Culture of Open Source

When you hear open source software, what do you think of? Software developers hunched over keyboards, fueled by sugar and caffeine, or a recent request by your IT department to try new “free” software? It’s both of those things, and much, much more.

Behind open source is a strong culture that champions putting patterns and models forward for debate. Open source may have started as a description of software source code and a development model, but it has moved far beyond that. It is the challenge to approach the world in an innovative way, looking for solutions that break from tradition, and doing so in a collaborative environment where transparency of process is the most important virtue.

You may not be certain what open source is, but you use it, and it is everywhere. Successful modern enterprises have come to rely on open source software (OSS). In the realm of infrastructure, open source provides benefits along the axes of cost, quality, speed, and risk mitigation through the support of a wide community of contributors. In addition, these organizations have found that quickly opening their innovations to collaboration with others in the market keeps them from getting too far ahead of the pack. The list of names is long, with proponents in the ranks of hardcore tech firms like Red Hat and Cloudera, but also popular businesses such as Facebook, LinkedIn, and Netflix.

Netflix has become known not just for streaming movies, but for providing its suite of in-house developed cloud management tools as free open source components. Organizations as large as IBM have heralded Netflix for this move, as it has accelerated their ability to innovate. Netflix’s cloud development solutions, such as Chaos Monkey for cloud testing and Asgard for deployment, are forcing cloud leaders like Amazon to push the boundaries of their offerings, lest they lose ground to newer competitors.

Without open source, LinkedIn would not have gained its success and become a social networking platform used by nearly every professional. Using open source as a big data analytics tool is allowing LinkedIn to dominate its space. This is not done blindly; the company states open source helps it in “reaching out to the best, brightest, and most interested individuals to explore what’s new and help us further build our components.” This is despite the fact that LinkedIn included in a 2011 registration filing the risks associated with OSS, specifically related to potential exposure to open source licensing claims. These issues signify that while there are serious risks that can come with improper management of OSS licenses, for businesses these can be a tradeoff against the rewards of being early to market.

Recently ThoughtWorks, the company I work for, announced that its Go software will now be freely available under the Apache 2.0 open source license. This was done as a conscious display of ThoughtWorks’ desire to remain a recognized innovator alongside other cutting-edge, global solution providers. We understand that open source simply creates better software, fostering an environment of collaboration where the best technology is developed and everyone wins. It is through the open source model that higher-quality, more secure, more easily integrated software is created, at a greatly accelerated pace and often at a lower cost.

ThoughtWorks embraced open source development in its early days, turning to agile methodology pioneers Ward Cunningham and Martin Fowler (current ThoughtWorks chief scientist) in 1999 to help guide a stalled project. During that endeavor, many innovations that would later make their way into the popular Framework for Integrated Test (FIT) were developed. More notable, however, was the development of a piece of software based on the extreme programming practice of continuous integration. CruiseControl from ThoughtWorks, which debuted within the following year, was the first continuous integration server on the market and led the way for nearly a decade as engineering teams embraced the practice as a standard.

Fundamentally, the reason why open source has flourished and produced incredible innovations in technology is the culture it brings to the table. With OSS, organizations can continuously improve and deliver quality software. Today’s hypercompetitive business environment requires rapid innovation and maintaining a steady flow of the most important work between all roles. A free-flowing, collaborative, and creative environment provides the possibility to get ahead of the competition and make the software release process a business advantage.

So, what is open source? It is the ability to revolutionize processes and industries, and create positive change inside and out.


Easily Get Your Open Source Framework Onto Openstack

The number of available open source frameworks is growing at an enormous pace, with over 1 million unique open source projects today, as indicated in a recent survey by Black Duck.

 

[Figure: growth in the number of unique open source projects (Black Duck survey)]

 

The availability of these frameworks has changed the way we build products and structure our IT infrastructure. Products are often built through integration of different open source frameworks rather than developing the entire stack.

 

The speed of innovation is also an area that has changed as a result of this open source movement. Where, previously, innovation was mostly driven by the speed of development of new features, in the open source world, innovation is greatly determined by how fast we can integrate and take advantage of new open source frameworks.

 

Having said that, the process of trying out new open source frameworks can still be fairly tedious, especially if there isn’t a well-funded company behind the project. Let me explain why.

 

Open Source Framework Integration Continuum

 

The integration of a new open source framework often goes through an evolution cycle that I refer to as the integration continuum.

 

[Figure: the open source framework integration continuum]

 

A typical integration process often starts with initial exploration, where we continuously explore new frameworks, sometimes without any specific need in mind. At this stage, we want to be able to get a feel for the framework, but don’t necessarily have the time to take a deep dive into the technology or deal with complex configurations. Once we find a framework that we like and see a potential use for it, we start to look closer and run a full POC.

 

In this case, we already have a better idea of what the product does and how it can fit into our system, but we want to validate our assumptions. Once we are done with the POC and have found what we’re looking for, we start the integration and development stage with our product.

 

This stage is where we are often interested in getting our hands on the API and the product features, and need a simple environment that will allow us to quickly test the integration of that API with our product. As we get closer to production, we get more interested in the operational aspects and need to deal with building the cluster for high availability, adding more monitoring capabilities and integrating it with our existing operational environment.

 

Each of those steps often involves downloading the framework, setting up the environment, and tuning it to fit the needs of the specific stage (i.e., trial, POC, dev, production). For obvious reasons, the requirements for each of those stages are fairly different, so we typically end up with a different setup process and different tools at each step, leading to a process in which we continuously rip and replace the setup from the previous step as we move from one stage to the next.

 

The Friction Multiplier Effect

 

The friction I was referring to above applies to the process for a single framework. In reality, however, during the exploration and POC stages, we evaluate more than one framework in order to choose the one that best fits our needs.

 

In addition, we often integrate with more than one product or framework at a time. So in reality, the overhead I was referring to is multiplied by the number of frameworks that we are evaluating and using. The overhead in the initial stages of exploration and POC often ends up as complete waste, as we typically choose one framework out of several, meaning we spend a lot of time setting up frameworks that we are never going to use.

 

What about Amazon, GAE or Azure Offerings?

 

These clouds limit their services to a particular set of pre-canned frameworks, and the infrastructure and tools that most clouds offer are not open source. This means that you will not be able to use the same services in your own environment. It also introduces a higher degree of lock-in.

 

Flexibility is critical as the industry moves toward more advanced development and production scenarios, and this could become a huge roadblock. If you get stuck with the out-of-the-box offering, you have no way to escape; your only way out is to rebuild the entire environment yourself, from scratch, using a different set of tools.

 

A New Proposition

 

Looking at existing alternatives, there is a need for a hassle-free preview model that enables a user to seamlessly take a demo into production with the flexibility to be fully customizable and adaptable. With GigaSpaces’ Cloudify Application Catalog, we believe that the game is changing. This new service, built on HP’s OpenStack-based Public Cloud, makes the experience of deploying new open source services and applications on OpenStack simpler than on other clouds. By taking an open source approach, we are guaranteed not to hit a roadblock as we advance through the process, and we avoid the risk of a complete re-write or lock-in. At the same time, we allow a hassle-free, one-click experience by providing an “as-a-service” offering for deploying any open source framework of choice. Because we use the same underlying infrastructure and set of tools through all the cycles, users can carry their experience and investment from one stage to the next and thus avoid a complete re-write.

 

Final Words

 

In today’s world, innovation is key to allowing organizations to keep up with the competition.

 

The use of open source has enabled everyone to accelerate the speed of innovation. However, the process of exploring and trying open source frameworks is still fairly tedious, with much friction in the transition from initial exploration to full production. With the new Catalog service, many users and organizations can dramatically increase their speed of innovation by simplifying the process of exploring and integrating new open source frameworks.


Decentralization: Key to Bitcoin’s Success

Bitcoin is now becoming a household name as, once again, the price of each coin nears 200 USD and more companies choose to accept payment in this convenient yet new payment option. With the recent closure of Silk Road and the US government’s temporary shutdown, Bitcoin has proven to be a force to be reckoned with.

 

What makes Bitcoin unique? It is a digital, decentralized cryptocurrency and a gateway to send money anywhere around the world, with just the click of a mouse or the scan of a QR code, to anyone with internet access. Proponents of Bitcoin pride themselves on supporting the lead cryptocurrency, which is based on peer-to-peer transactions. Yet one must never take the decentralized nature of Bitcoin for granted.

 

Unfortunately, any system can trend back towards centralization if not carefully monitored and safeguarded. As humans, we can drift towards centralized structures for an often false sense of “security” and “certainty.” Centralization tends to create more problems than it solves when the power to make decisions falls into the hands of a select few. History provides evidence of the tremendous failure of central planning: the collapse of the Soviet Union, massive starvation and lack of development in North Korea, and, right now, a bloated US government with a growing bureaucracy and skyrocketing debt. On a US-centric note, how can a Washington bureaucrat really know and meet the needs of a school teacher in California or a farmer in Iowa? At the end of the day, each individual knows what is best for his or her needs, and systems and societies built on greater individual responsibility are more diverse and will meet more needs than blanket solutions and inefficient “one-size-fits-all” policies. So, while there is comfort in being responsible for less in the short run, there are too many unintended long-term consequences.

 

Bitcoin is not just a currency, but a movement towards decentralization in society, government, financial systems, and thought. It has served as a catalyst to provide power to individuals seeking to separate themselves from constricting governing bodies and failing central banks. To date, Bitcoin has been characterized as an open source project prompting discussion of the role of money in society, the danger of current banking structures, and the creative decentralized solutions to centralized problems in society today.

 

A model of organized decentralization is key to the success of Bitcoin around the world. Decentralization does not equate to chaos; it means many individuals working to promote a project with the common goal of strengthening it, rather than placing trust in one entity. The best example of organized decentralization is grass-roots activism and organization within the Bitcoin community, instead of central actors making all major decisions and protocol adjustments. Each member of the Bitcoin community serves a unique role and has a distinct voice and part to play in keeping the Bitcoin currency on a pathway to success, growth in value and utility, and a global reach. Most members of the Bitcoin community already recognize that the larger the user base, the less volatile the currency, and the greater the legitimacy lent to it.

 

Open source software is also vital to the preservation and strengthening of Bitcoin. With decentralization in the development community, there is greater room and leverage for more individuals to enter the community and contribute. The question, then, may arise as to how the Bitcoin-Qt client can remain decentralized. One option that has prompted growth in Bitcoin development is businesses giving back to the Bitcoin community by hiring individuals to work solely on the Qt client. There is value in the marketplace of ideas, as trial and error and diversity of thought inspire the development of the strongest wallets, payment processors, and software. There is too much risk in entrusting all development and protocol decisions to a few. The Bitcoin community must continue its commitment to the open-source ideals that make Bitcoin the resilient system it is today.

 

As Bitcoin continues to increase in value and central banks continue to disappoint, it is clear that decentralization leads to healthier societies and global financial success. What will become of Bitcoin? To date, Bitcoin is the most successful cryptocurrency and is still in its early stages of development. What we do know: a model of organized decentralization is vital to the success of the Bitcoin currency and movement.


10 Mistakes to Avoid in Planning for OpenStack Cloud

Build your own cloud? With free open source? Too good to be true?  Not quite. 

 

OpenStack provides you with the opportunity to make resources available to your people by creating your own cloud without paying huge software license fees — that’s true. But in the years we at Mirantis have been building and deploying production OpenStack cloud environments, we’ve seen a lot of magical thinking. If you watch out for these ten common mistakes, you’ll be moving in the right direction.

 

 

Mistake #10: It’s open source. Who needs a budget?

 

Very often we hear, “Why do we need a budget for this? We’ll just implement the code off the repo. There’s no license fee.”

 

That last bit is true. There’s no license fee to run OpenStack, but open source software doesn’t just appear out of nowhere, especially for a project as large and complex as OpenStack. Hundreds of people get paid to work very hard to improve the code, which changes constantly, so the latest version of one component requires you to bring in the latest version of everything else.

 

The problem here is that the latest code is always unstable, and critical bug fixes happen at the speed of the community, not yours. Sometimes you’re going to need to pay someone to fix your bugs in your timeframe and not the community’s. Thus, open source code is free at any given moment in time, but there’s always change over time, and time is money.

 

Mistake #9: I can do it by myself.

 

If your entire cloud is small enough to fit on your laptop, you might be able to do it yourself.  If you’re looking at a medium or large cloud, however, get some help.  Most people implement clouds for reasons that aren’t simple; you must understand what everyone else needs — not just you — in order to do this right. Explicitly document your use cases so that you can figure out whether you need a public, private, or hybrid cloud. Is your workload multitenant, long-running, short-running, dedicated, ephemeral, stable, bursty, or maybe even all of the above?

 

Maybe the solution to your problem isn’t even in the cloud. Look at your legacy applications. Do they belong in the cloud, or do they need to continue on your existing infrastructure?

 

None of these decisions can be made in a vacuum.

 

Mistake #8: Everyone understands the terminology.

 

You may believe that everyone understands the terminology, but it is critical to understand the whos, whats, whys, whens, wheres, and hows — collectively. Consider the following sentence we heard in a planning meeting:

 

We built a service to support the service, but when we had problems with the service level, we called services.

 

Really?  Take the time to understand what your constituents mean in precise terms, because there is no common understanding — even with common words. There’s a lot of chatter on the OpenStack forums about the actual meaning of the word “type.”

 

Mistake #7: Assuming legacy systems will go away (or be migrated).

 

There’s a reason COBOL programmers still have jobs. Legacy applications don’t just go away; that’s just the reality. Just last week, a hyper-enthusiastic system admin told us, “We’re just going to build a cloud and move everything over.” Maybe it’d work, but not quickly. Some legacy systems, such as certain data storage, transactional, financial, and insurance applications, are just not ready to move to the cloud, especially if business rules have not been well documented.

 

Mistake #6: All you need is load balancing.

 

This particular fallacy comes from thinking of the cloud as a giant router that just shifts stateless traffic to wherever it can run fastest. Think about what workloads you’re moving to the cloud. Is it a dev-test environment? Can you ramp up or ramp down? Can you shut it down in an emergency? Do you need a single component or multiple components? In most cases, you can’t scale an application just by cloning its elements; not all constituent services can maintain consistency across replicas unless they’re architected for it from the get-go.

 

Mistake #5: We don’t need to talk to the developers.

 

In an OpenStack cloud, applications can exercise far more control over the platform they run on than in a conventional environment, but with great control comes great responsibility. Operations and developers need to play nice with each other — you need expertise at the developer level to both properly architect and operate the solution.

 

That’s not to say that admins and operators should just get out of the way. You’re building an IaaS cloud with OpenStack so developers can use the infrastructure more easily. You want to give the developers just enough choices to make them successful, so create a menu — don’t give them the run of the kitchen.

 

Mistake #4: Our staff has the skills.

 

We often hear, “Our staff has the skills. It’s just like Linux.” Sure, if your organization is endowed with source-code masters for IP networking, hypervisor resource management, storage redundancy and optimization, source-code management, security and encryption, driver optimization, distributed application architectures, and any number of other technologies involved in OpenStack, you may be right. Chances are, though, you’re missing one or more of these skills, and your staff needs to know that.

 

Everyone can use Linux, but not everyone’s a kernel engineer. Ultimately, you can become the source-code master who knows everything, but it won’t happen overnight.

 

Mistake #3: We can monetize later.

 

“Cloud introduces great efficiencies.  It’ll pay for itself.”  Go ahead and try to get that past the CFO.

 

More than likely, you’re going to need new hardware and it’s not going to be the light and cheap stuff. And smart people don’t work for nothing. You’re going to need to train the people who don’t know everything they need to know. Oh, and do you also have a vacant, water-cooled data center nearby?

 

You may also need a new business model. Your existing infrastructure was funded with a set of assumptions about utilization by different functions and business owners – assumptions that likely no longer hold true. Where are your users going to get the money to support the cloud?

 

Understand your users’ economics and you’ll understand the value of your cloud.

 

Mistake #2: The cloud fixes itself

 

A cloud is not auto-magical, but, with the right monitoring and maintenance, it can, indeed, fix itself — sometimes.  But you need to make sure you have the right monitoring and the right redundancy, especially for alerting when capacity thresholds are near. You might not know it’s broken until it doesn’t fix itself; then you’re going to get that 3:00 A.M. call, and you must be prepared.  Remember that engineer who knows everything?  She’s not always going to be available.

 

Mistake #1: Failure is not an option

 

Finally, there’s the romantic old-school belief that failure is not an option. In fact, when it comes to cloud, failure isn’t just an option — it’s a core design principle.  Fail often and fail fast, so you can move quickly. Just make sure that your systems and applications are prepared for the moment when something goes down and they have to adjust.

 

“Automate all the things,” as the saying goes — especially the notification of failure.  This way, you can really enjoy this new technology as your systems keep going even when something doesn’t go quite according to plan. That’s when your organization gets the maximum benefit from the cloud.

 

Isn’t that why you wanted OpenStack in the first place?


Top Startup and Tech News Today - 7 Things You Missed Today

1. IBM Commits an Additional $1 Billion to Linux Innovation

 

IBM announced at LinuxCon that it would invest $1 billion in Linux and other open source technologies. The hope is that this investment will let clients utilize big data and cloud computing. This is IBM’s second commitment of $1 billion to Linux development. With this announcement came the unveiling of the Power Systems Linux Center initiative in France and the Linux on Power development cloud initiative. Both are intended to expand IBM’s support of Linux open source vendors and applications.

 

The Linux on Power development cloud initiative is intended to expand IBM’s Power System Cloud. Users will be able to access a no-charge cloud service that gives developers, partners, and clients the ability to “prototype, build, port, and test” their Linux applications. IBM VP of Power Development Brad McCredie says that “the era of big data calls for a new approach to IT systems; one that is open, customizable, and designed from the ground up to handle big data and cloud workloads.”

 

2. How Facebook Stands to Gain by Sharing Its Trade Secrets

 

Companies used to live by the idea of secrecy, guarding their operations to ensure that competitors never gained an advantage. This used to be the method most corporations employed; however, Facebook changed the game by disclosing a very detailed report on how it ran its data centers, powered its website, and developed its mobile apps.

 

This 71-page report details the company’s approach to everything from removing the plastic bezels from its servers to rejecting app modifications that increase power consumption. The report was published as part of Internet.org, a multi-company effort to bring the Internet to the next 5 billion people. The effort has generally been described as philanthropic, an exercise in economic empowerment and human rights, but there is, naturally, plenty for Facebook to gain in terms of opening up huge new markets.

 

Aside from opening new markets, Facebook has a lot to gain from sharing such information: it makes its own life far easier. If Facebook can get companies thinking the way it thinks, they’ll buy the same kinds of hardware Facebook runs on. The less “exotic” and special something is, the cheaper it will be. Facebook has a large enough presence that it can easily steer product decisions.

 

Facebook is not the only company to share its secrets and embrace open-source software; many other companies do the same. But it is one of the larger companies to do so, and though it stresses the charitable nature of its actions, there is a clear economic advantage to be gained from doing so.

 

3. Iran restores blocks on Facebook, Twitter

 

Iran’s block on Facebook and Twitter was lifted for several hours. The brief access was a “technical glitch” that was quickly fixed; those who managed to gain access had it only for a short period of time. This points to an increasing struggle between groups seeking to have Facebook and other social networking sites unblocked and those working in the Iranian government, who have firm control over Internet access.

 

Many Facebook and Twitter users in the capital, Tehran, assumed that the brief Internet freedom was the result of a new policy from President Hasan Rouhani. Many people wrote on their social media accounts, praising him for the new openness in Iran. That praise was quickly subdued when the social media sites became unavailable again on Tuesday morning.

 

4. What will iOS7 do for your iDevice?

 

iOS7, the first operating system designed by Jony Ive, the man behind the physical look and feel of all Apple devices, will be ready for download on Wednesday. But even if your device is compatible, not all of iOS7’s promised 200 new features will be available.

 

The latest OS brings a serious overhaul of Siri to bring her performance more in line with what Android offers via Google Now. Siri can now directly plug into Wikipedia, Twitter, Bing, transit routes, traffic updates, and even the user’s own photo album. But not all headline features will function on every Apple device; ultimately, it depends on each device’s processor, RAM, and screen resolution.

 

Here’s a list of what iOS7 will do for you:

- AirDrop, a protocol for sharing files over wifi, even when there is no signal, will be coming to the iPhone 5, iPod touch (5th generation), iPad 4, and iPad mini.

- Siri will be updated with a new graphical interface and the ability to tap into Wikipedia and Bing for web searches.

- iOS7 will include lens filters, which will only be available on the iPhone 5 and the iPod touch (fifth generation). You can now apply effects before you take the photo.

- iTunes Radio will work across the iPhone 4, 4s, and 5, and the iPad 2, 3, 4, and mini.

 

5. Google buys Bump app for sharing smartphone files

 

Google has bought out Bump, the smartphone app that lets you share contacts, pictures, and other data by “bumping” smartphones together. Google has acquired the Bump team but is leaving the popular Bump application available to users. “We couldn’t be more thrilled to join Google,” Bump co-founder and chief executive Lieb said. “Bump and Flock will continue to work as they always have for now; stay tuned for future updates.”

 

The deal has been reported to have been worth $30-$60 million.

 

6. AT&T Promises (Again) Not To Disconnect Your Account If It Suspects You Of Illegally Downloading

 

Even though its copyright warning letter says AT&T will cut users suspected of illegally downloading copyrighted material off from the Internet, AT&T says that it will not. The letter warned that illegal downloading is a violation of AT&T’s Terms and could result in “a limitation of Internet access or even suspension or termination” of the account.

 

The letter is part of the “six strikes” plan, in which the nation’s ISPs send warnings to those they think are breaking copyright law. The plan is supposed to be about education; repeat violators are not supposed to be cut off from the Internet, but instead temporarily redirected to another page where they will be required to view educational materials on copyright. AT&T says that the letter in which it warns of cutting people off from the Internet is simply telling people what could happen should they be found guilty of illegal downloading under the Digital Millennium Copyright Act. But since the six-strikes warnings are merely allegations, AT&T promises it won’t be shutting down anyone’s Internet.

 

7. Google’s AdID to take a bite out of third-party cookies

 

Google is fed up with third-party cookies, so it has a plan called AdID to remove them from your online advertising. The plan could upend the $120 billion online advertising business while giving more control over which ads are shown to consumers and to Google. An anonymous source at Google says that AdID could give Google a big bump in its online ad business (Google currently controls about a third of all online advertising revenue). “The AdID would be transmitted to advertisers and ad networks that have agreed to basic guidelines, giving consumers more privacy and control over how they browse the Web,” said the anonymous source.

 

Google, on the other hand, denied that any plans were imminent. “We believe that technological enhancements can improve users’ security while ensuring the Web remains economically viable,” a Google spokesperson told CNET. “We and others have a number of concepts in this area, but they’re all at very early stages.”

 

 

 

 


Top Startup and Tech News Today - 7 Things You Missed Today

1. How eBay Could Rescue Bitcoin From the Feds

 

Bitcoin exchanges have run into a hurdle in the form of U.S. banks. There are questions about whether or not they “meet federal and state money transmission business regulations.” While this is quite a setback, another company is in a prime position to take advantage of the situation: eBay. It has a “virtual currencies” section that allows people to sell and purchase Bitcoins—in effect, a forum for Bitcoin exchange that bypasses the federal and state regulations via PayPal.

 

The only thing preventing eBay from taking advantage of this opportunity, should it choose to do so, is the fact that PayPal allows chargebacks. Someone could purchase Bitcoins on eBay and simply state that the Bitcoins weren’t delivered, defrauding the seller. If eBay manages to solve this problem, Bitcoin could become even bigger competition for PayPal. “They could very well find their business model outdated,” says financial regulations lawyer Van Cleef.

 

2. Google is joining the Open edX platform

 

Google released Course Builder, an experimental platform, last year to test the waters in online education. It was well received, with a multitude of online courses made available as various institutions experimented with MOOCs (massive open online courses). To continue on the online education front, Google has decided to join Open edX, a non-profit aiming to provide interactive online courses, as a contributor.

 

The combined efforts of the two organizations will provide much for developers and consumers alike. Director of Research Dan Clancy says, “We hope that our continued contributions to open source education projects will enable anyone who builds online education products to benefit from our technology, services and scale. For learners, we believe that a more open online education ecosystem will make it easier for anyone to pick up new skills and concepts at anytime, anywhere.”

 

3. Consumer: Stay Smart to Avoid WiFi Hackers

 

Becoming a super-connected metropolis with free WiFi everywhere sounds great, but it also has its cons. One glaring problem is the presence of WiFi hackers. Leeds is one such city that hopes to realize this vision. A survey of Britons was conducted to examine their WiFi use and determine how safe people really are.

 

Half of those surveyed do not know whether the WiFi hotspot they use is secure, opening them up to identity fraud. Two thirds use the hotspots to check their email, a smorgasbord of personal information. Even more surprising, ten percent of people access their bank accounts over public WiFi.

 

A brief list of advice from these findings: keep important online tasks at home, remove automatic connections on your mobile device, and don’t use apps whose encryption method is unknown.

 

4. Microsoft Seeks Cloud, Mobile, and Gaming Startups in London’s Tech City

 

Microsoft launched a 12-week accelerator program for UK cloud, mobile, and gaming startups in East London Tech City. Twenty startups will have the opportunity to gain mentorship from executives from Microsoft, Train2Game, Lift London, and more. This program is the latest of 10 run by Microsoft around the world. Of the 119 companies that have gone through them, 85 percent have secured funding within 6 months of the program’s end. The kicker, though, is that Microsoft does not plan on taking equity from the startups. Rather, it hopes that the accelerator program will help to create future successful partnerships and additions to the Microsoft family.

 

5. Facebook Rolls Out “Professional Skills” Section on User Profiles

 

Facebook is trying its hand at what LinkedIn has already been doing: acting as a professional outlet for users. It recently introduced a new feature that allows users to add professional skills to their profile. Facebook takes this one step further than LinkedIn by connecting skills to relevant interest groups, giving potential hires even more exposure. For those who worry about privacy, there is an option to adjust the privacy settings on the resume.

 

“If Facebook’s Professional Skills feature takes off, you’ll be able to browse through friends’ vacation pics and potential hires, all at the same time.”

 

6. What Startups Need to Know about Obamacare

 

With Obamacare coming out soon, startups have more health insurance options available to employees. Plans will come in 4 flavors—the typical Bronze, Silver, Gold, and Platinum setup, each with increasing cost and coverage.

 

Exchanges will open on October 1st, 2013—small businesses can take advantage of this time to look at the exchanges and plans. Since insurance companies will not be able to deny anyone coverage, insurance rates will increase, especially for those below the age of 30. However, most of the regulations placed on small businesses have been delayed until 2015 instead of 2014.

 

7. Fun: First Actual Computer Bug Was Found Today, 66 Years Ago

 

It’s time to celebrate the 66th birthday of the first discovered computer bug! In 1947, the Mark II Aiken Relay Computer at Harvard had a peculiarity in its system—a bug. For all the technophiles out there, it unfortunately isn’t the metaphorical bug we all know of; it was literally a bug: a moth. The person who helped to publicize this and coin the terms “bugging” and “debugging” was Grace Hopper. The moth itself is preserved in a logbook at the National Museum of American History, but, unfortunately, is not on display.


Is Open Source A Good Business Model For Your Company?

While terms like “cloud,” “big data,” and “devops” may be over-used and over-hyped, it should be clear to anyone in the industry that we are undergoing a fundamental shift in the way IT is delivered, consumed, and even conceived. What is also clear is that this new era in computing is being both driven and dominated by open source.

Ask almost anyone what the most significant and exciting technologies are, almost anywhere in the stack, and you are likely to hear the name of an open source project. Ask about Big Data, and you will most likely hear Hadoop or Cassandra. Databases? MongoDB, CouchDB, and Riak. Storage? Gluster and Ceph come to mind. Networking? OpenFlow and now OpenDaylight. Cloud as a whole? OpenStack seems an unstoppable force. Mobile? It’s hard to ignore the tidal wave that is Android. Application delivery and devops? Take your pick of Chef, Puppet, Salt, Jenkins, or Docker.

 

As Eric Knorr recently wrote, “We’ve come a very long way from the old saw that ‘open source doesn’t innovate.’ Instead, you might ask: Is innovation in enterprise software happening anywhere else other than in open source land?”

 

Of course, any discussion of open source inevitably comes around to someone asking, “While this may be great for innovation, can open source be a sustainable business model?” As someone starting his second stint as CEO of an open source startup, I can answer with an unequivocal, “It Depends.”

 

Open Source makes sense for an increasing number of situations, especially when a company is trying to disrupt proprietary incumbents or when (as is true almost everywhere) there is a limited window to become prominent in a rapidly changing ecosystem. Indeed, I would argue that, in many situations, trying to make it as a small open source company is far less risky than trying to gain acceptance as a small, proprietary company.  Open source brings its own unique challenges, however, and certainly isn’t appropriate for everyone.

 

With that in mind, here are some questions to ask if you are considering becoming an open source company. The more that you answer “yes” to, the more likely that open source is the right strategy for your company.

 

 

Am I trying to sell into a market with entrenched, proprietary competitors?

If so, being open source can get you into accounts that would never speak to a proprietary startup. Additionally, it gives you the opportunity to compete on battlegrounds that favor you over competitors with larger sales forces, marketing budgets, etc.

 

Am I trying to enable an ecosystem and are there important open source projects around me in the stack? 

If so, being open makes it much easier to form and integrate into an ecosystem.

 

Do I have a clear idea of how to add value on top of the open source version, while making the open source version robust and valuable?

There are many interesting variations on the open source model, but they all depend on having both a big “top” of the funnel (lots of people using, trying, or loving the open source product), as well as a clear reason for a meaningful percentage of those people to pay (e.g. a managed service offering, support). If the only way to get people to pay is to make the open source version substandard, you won’t likely succeed.

 

Will being open source make me radically better than the alternative or will it just make me a cheaper alternative to an already good solution?

The best open source companies use being open to make themselves radically better, at least for certain markets or applications. For example, MySQL wasn’t just cheaper than proprietary RDBMSs; it was better for PHP and helped enable an entire stack (LAMP).

 

Is the nature of my project such that “many eyes” and “many contributors” will make it better? 

In my experience, this is more likely to be true of fundamental technologies, and less likely to be true for things dependent on an elegant user interface.

 

Is my project such that being tested at very large scale is key to success?

While you may never get paid by large universities or national labs, there are few places better to prove your product out at massive scale.

 

Do I understand the implications of being open source on development, QA, sales, marketing, financing, etc.? 

Being open was key to Gluster’s success, and has been key to how Docker is approaching the market. The decision has had significant implications for all aspects of the company. You can’t be “half open” any more than you can be half pregnant. Make the decision wisely.


How I Built Scalable Web Apps on the M.E.A.N. Stack

“M.E.A.N.” is one of the coolest stacks to develop on.

 

In my opinion, M.E.A.N. (MongoDB, Express, AngularJS, and Node.js) captures the essence of modern-day web development. It neatly ties the concept of a NoSQL database to a modern JavaScript framework and gets them to communicate via Node.js.

 

Getting started with the MEAN stack might seem very daunting at first. You might think that there is too much new technology to learn. The truth is that you really only need to know JavaScript. That’s right, MEAN is simply a JavaScript web development stack.

 

So, how do you actually get started developing on the MEAN stack?

 

The first step is to set up a project structure. I’ve found the following structure to make the most sense:

 

controllers/
db/
public/
package.json
server.js

 

This structure lets you keep the entire stack in a single project. Your AngularJS front end goes into the public folder, all your Express API logic goes into controllers, and your MongoDB collections and logic go into the db folder.
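
To make that layout concrete, here is a minimal sketch of what server.js could look like under this structure; the /api/items route and the controllers/items module are illustrative names I am assuming for the example, not something prescribed by the article.

// server.js - a minimal sketch of the entry point for this layout
// (the /api/items route and the controllers/items module are illustrative)
var express = require('express');
var app = express();

// REST API logic lives in controllers/
var items = require('./controllers/items');
app.get('/api/items', items.list);

// serve the AngularJS front end from public/
app.use(express.static(__dirname + '/public'));

app.listen(3000);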

 

Now that you’ve set up a general project structure, you need to initialize your public folder as  an Angular project. It is best to do this using a tool called Yeoman.

 

Yeoman is a toolkit that makes it easy for you to get started with a variety of JavaScript frameworks and other web frameworks like Bootstrap and Foundation. You can learn more about Yeoman at Yeoman.io.

 

Installing Yeoman is pretty straightforward.

 

npm install -g yo

 

Yeoman is built around the concept of generators. Think of them as a set of rules that Yeoman uses to generate the project structure and files for your project. In this case, we will be setting up AngularJS. First, you have to get the generator for AngularJS by running:

npm install -g generator-angular

This will install the scripts and templates needed for you to bootstrap your Angular app. Next, cd into the public directory we created and generate the project into this folder by running:

yo angular

It will prompt you with a series of questions that tailor the generated files to your needs. Once this is done, you will notice that the structure looks like just another node app, and you’re right to think that!

Yeoman and the tools that Yeoman uses (like bower and grunt) are all using npm to get the modules they need to build your app.

To run your Yeoman app, simply go into the public folder and run grunt server. Grunt is essentially a task runner for web apps. It has a set of rules, found in Gruntfile.js, that outline what needs to happen for the different options.
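
As a rough illustration of that pattern (the Gruntfile generated by generator-angular is far larger, and the "hello" task below is purely illustrative), a Gruntfile wires configuration to registered tasks like this:

// Gruntfile.js - a simplified sketch of the initConfig/registerTask pattern
// (the generated Gruntfile chains many plugin tasks: connect, watch, uglify, copy, ...)
module.exports = function (grunt) {
  grunt.initConfig({
    app: 'public'   // an illustrative config value that tasks can read
  });

  grunt.registerTask('hello', function () {
    grunt.log.writeln('Serving the app from ' + grunt.config('app'));
  });

  // "grunt server" and "grunt build" in the generated file are aliases like this one
  grunt.registerTask('server', ['hello']);
};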

Running grunt server will run a small server that serves your website. Running grunt build will go through all your source code, minify your HTML, JS, SCSS, and images, and spit the result out into the dist folder.

You can learn all about Yeoman and Grunt at Yeoman.io.

Your basic development workflow would be as follows:

In one terminal window, you will have grunt server running. This will run your AngularJS app. In another terminal window, you can have your node app running, providing the API for your AngularJS app.
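
Assuming the layout above (and the server.js sketch earlier), that workflow boils down to two commands, one per terminal:

cd public && grunt server    # terminal 1: AngularJS front end with live reload

node server.js               # terminal 2: Node/Express API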

The reason I personally love doing this is that grunt server refreshes your browser on the fly when you make SCSS changes, which is very useful when you’re developing.

When it comes time to deploy to production, running grunt build will output an “optimized and minified” version of the website to the dist folder. This is the folder you can set up in server.js via express.static and serve when someone hits your node server.

app.use(express.static(__dirname + '/public/dist'));

For a developer building a scalable web app, MEAN has profound implications. You can leave all your templating to render on the client side, lessening the load on your Node server. Your Node.js code is exposed via an API that your AngularJS app can connect to. This ensures that if, in the future, you want to build a mobile app or open up your API to third parties, it’s very simple, since the APIs are already created.
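
As a sketch of what such an API endpoint could look like (the items name is my illustrative assumption, not from the article), a controller module simply exports handlers that return JSON, so the same route can serve the AngularJS app, a future mobile app, or a third party:

// controllers/items.js - an illustrative JSON API handler
// (in a real app, the hard-coded array would be replaced by a query
// against the MongoDB logic kept in the db/ folder)
exports.list = function (req, res) {
  res.json([{ name: 'example item' }]);
};

Because every consumer hits the same route, nothing in the handler has to change when new clients appear.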

Lastly, one of the more interesting things about the MEAN stack is that because it is so loosely coupled, if you wanted to replace any of the technologies with another, you could easily do so without changing the rest of the stack. For example, you could replace AngularJS with Ember, or Node.js with Python.


OpenStack's Third Birthday - a Recap with a Look into the Future

OpenStack was first announced three years ago at the OSCON conference in Portland.

I remember the first time I heard about the announcement and how it immediately caught my attention. Ever since that day, I have become a strong advocate for the technology. Looking back, I’ve often wondered why OpenStack earned my loyalty so quickly.

 

Was it because OpenStack is an open source cloud? Well, partially, but that couldn’t be the main reason for my interest. OpenStack was not the first open source cloud initiative; we had Eucalyptus, then Cloud.com and other open source cloud initiatives before OpenStack emerged.

 

But these open source cloud initiatives were started by unreliable companies that lacked the commitment for a true open movement. I knew that a real open source cloud movement couldn’t meet its potential as an industry movement if startups led it. I knew that experience in the field gave OpenStack a much better starting point.

 

I also knew some of the main individuals behind the initiatives and their commitment to the Open Cloud and that made me confident that the OpenStack project would have a much higher chance for success than its predecessors. After three years, the game is essentially over and it’s obvious who’s going to win the open source cloud war. I’m happy to say that I also had my own little share in spreading the word by advocating the OpenStack movement in our own local community which also grew extremely quickly over the past two years.

 

OpenStack as an Open Movement

Paul Holland, an Executive Program Manager for Cloud at HP, gave an excellent talk during the last OpenStack Summit in which he drew parallels between the founding of the United States and the founding of OpenStack.

 

Paul also drew an interesting comparison between the role of the common currency on the open market and its OpenStack equivalents: APIs, common language, processes, etc. Today, we take those things for granted, but we cannot imagine what our global economy would look like without the Dollar as a common currency or English as a common language, even if they have not been explicitly chosen as such by all countries.

 

We often tend to gloss over the details of the Foundation and its governing body, but those details make OpenStack an industry movement.  This movement has brought large companies, like Red Hat, HP, IBM, Rackspace and many others, to collaborate and contribute to a common project as noted in this report. The steadily growing number of individual developers year after year is another strong indication of the real movement that this project has created.

 

Thinking Beyond Amazon AWS

OpenStack essentially started as the open source alternative to Amazon AWS. Many of the sub-projects often began as Amazon equivalents. Today, we are starting to see projects with a new level of innovation that do not have any AWS equivalent. The most notable ones, IMHO, are the Neutron (network) and BareMetal projects. Both have huge potential to disrupt how we think about cloud infrastructure.

 

Only on OpenStack

We often tend to compare OpenStack with other clouds on a feature-to-feature basis.

 

The open source and community adoption nature of OpenStack enables us to do things that are unique to OpenStack and cannot be matched by other clouds, like:

  • Run the same infrastructure on private and public clouds.
  • Work with multiple cloud providers; that is, have more than one OpenStack-compatible cloud provider to choose from.
  • Plug in different hardware as cloud platforms for private clouds from different vendors, such as HP, IBM, Dell, and Cisco, or use pre-packaged OpenStack distributions, such as those from Ubuntu, Red Hat, Piston, etc.
  • Choose your preferred infrastructure for storage, networking, etc., assuming that many of the devices come with OpenStack-supported plug-ins.

 

All this can be done only on OpenStack; not because it is open source, but because the level of OpenStack adoption has made it the de-facto industry standard.

 

Re-think the Cloud Layers

When the cloud first came into the world, it was common to look at the stack from a three-layer approach: IaaS, PaaS and SaaS.

 

Typically, when we designed each of the layers, we looked at the other layers as black boxes and often had to create parallel stacks within each layer to manage security, metering, or high availability.

 

Since OpenStack is an open source infrastructure, we can break the wall between those layers and re-think where we draw the line. When we design our PaaS on OpenStack, there is no reason why we wouldn’t reuse the same security, metering, messaging and provisioning that is used to manage our infrastructure. The result is a much thinner and potentially more efficient foundation across all the layers that is easier to maintain. The new Heat project and Ceilometer in OpenStack are already starting to take steps in this direction and are, therefore, becoming some of the most active projects in the upcoming Havana release of OpenStack.

 

Looking Into the Future

Personally, I think that a world with OpenStack is healthier and brighter for the entire industry than a world in which we are dependent on one or two major cloud providers, regardless of how good a job they may or may not do. There are still many challenges ahead in turning all this into a reality and we are still at the beginning. The good news, though, is that there is a lot of room for contribution and, as I’ve witnessed myself, everyone can help shape this new world that we are creating.

 

OpenStack Birthday Events

To mark OpenStack’s 3rd Birthday, there will be a variety of birthday celebrations taking place around the world. At the upcoming OSCON event in Portland from July 22-26, OpenStack will host their official birthday party on July 24th. There will also be a celebration in Israel on the 21st, marking the occasion in Tel Aviv.

 

For more information about the Foundation’s birthday celebrations, visit their website at www.openstack.org.


Patents and Open Source: Having Your Cake and Eating It, Too

Contrary to popular belief, protecting your business through patents is not necessarily at odds with an open-source approach.

The two can be combined with successful results. One dynamic start-up working in machine translation software filed a patent in its early days—but the inventor still refined the technology via an open source platform, helping to turn the technology into an industry standard.

 
Patents have many uses. They are more than a barrier to keep rivals out. They can be the basis for fruitful collaboration or vital commercial intelligence.  They can also help you build brand value. Patents are an asset but also a tool. For small companies and start-ups, patents may convince wary investors in the cautious money markets. The IP in any innovative project, and how it is managed, can be the dealmaker: in the early stages of a new venture, it might be the only thing that can secure finance.

 

Take the example of German computer scientist Philipp Koehn, who together with his professors at the University of Southern California, developed and patented a new model for statistical machine translation.

 

Translation by computers had been around for a long time but, until Koehn came along, machine translation had not come close to reaching its full potential. Koehn and his team came up with a phrase-based model (translating words in their context, rather than one word at a time), which was a cut above the old sluggish word-based systems. Their new technique found the best statistical match for words based on how often they appeared in pairs of existing translations. The results were astounding. Today most of the big names in Internet translation have integrated this technology. For their invention, Koehn’s team was nominated for the European Inventor Award 2013.

 

By patenting their technology, Koehn and his team could secure crucial financing (from both venture capitalists and public research programmes), and after seven years their company, Language Weaver, was bought by SDL for $42.5 million. But Koehn later opted for an open source approach, convinced that it could lead to broader research, development, and usage. At the University of Edinburgh, he started an open-source platform called Moses, which he still runs today. A community of enthusiastic researchers around the world can directly access and contribute to the software, effectively crowd-sourcing improvements.

 

The economic benefits are two-fold: Firstly, many companies have already integrated the free open source technology into their organisations, creating a big user- and client-base. Secondly, a multitude of new software companies have been founded, expected to turn phrase-based statistical machine translation and related services into a several billion-dollar market in the coming years.

 

Koehn understood that there is a huge world market for ‘imperfect’ translations that allow people who need a quick translation to get the gist of what is being said. Today Moses is used by big companies for document and website localisation, so customers all over the world can read their instruction manuals or properly install a computer programme.

Patents on software?


This example shows that the traditional proprietary patent model is not incompatible with open source. But it also debunks another myth, mainly about so-called “software patents.” It is true that the lines of code in a programme are protected only by copyright (like the lines of text in a story), but not by patents. However, a great many inventions we know today, like GPS, Wi-Fi, Bluetooth, and mobile phones, all rely on software. If they are new and inventive or solve a technical problem (“How can I use satellites to guide me to my destination?”), they can be patented.

 

Such patenting is possible when the product or process of the invention offers more than mere lines of software. The resulting patent protects the invention regardless of the particular lines of code or computing language used in the underlying software. Koehn’s phrase-based model of statistical machine-based translation is one of these “computer-implemented inventions.”

 

Of course, the value of non-patentable inventions cannot be denied. Much of the service industry sector, and the creative industries, have little use for patents, despite innovating to a very high degree. After all, only a minority of businesses are innovating in sectors where patent ownership is relevant. Even so, there are many other ways in which the patent system can be used to create commercial advantages.

 
