@Aritmatics, thank you,

now please present some hypotheses for those ;)
- With all the advent of this new 'cloud,' what have the main benefits to the users been? Curse of choice?
The Cloud puts a supercomputer in every user's pocket. Big Data, Internet of Things and a whole lot of other buzzwords will become accessible to the consumer thanks to the underlying Cloud revolution.

We're stepping into the age of "IT Resource" abundance. The days of resource limitation (memory, processing power, network bandwidth, storage capacity, ...) are slowly being overcome.
I was thinking along the opposite lines: the cloud provides so much potential that what users are seeing now is nothing compared to what will come (and that's me being really enthusiastic). The worst part of it is: there's no guarantee of said progress, because this whole cloud thing drags all the 'distributed' warts along with it.
i thought of adding some infographics...
arithma wrote: I was thinking along the opposite lines: the cloud provides so much potential that what the users are seeing now is nothing compared to what will come, being really enthusiastic. The worst part of it is: there's no guarantee for said progress, because this whole cloud thing grabs all the 'distributed' warts with it.
Here are the benefits that a user can get today thanks to the Cloud:
  • Be part of a social network with over 1 billion members
  • Have file storage that is available everywhere
  • Have access to a repository of videos so large it will make your head explode
  • Store all the pictures of your life and access them anywhere
  • Collaborate in real time on different kinds of documents
  • Have real-time access to market prices and other financial indicators
  • Shop online on a platform that sells everything and delivers everywhere in record time
  • Communicate via audio/video with contacts and loved ones regardless of geographical distance
  • Search a virtually infinite list of indexed webpages for patterns, and obtain results in milliseconds
All of these are reliable, highly available and widespread. Sure, these services predate the Cloud. But it is the Cloud that made them widespread, cheap (if not just plain free) and reliable. It allowed us to take what was accessible to the few and make it available to the masses.

Again, the Cloud is only about making huge amounts of resources available. Consumers won't benefit directly from more memory or higher processing power. These are just here to fuel stronger apps that will impact everyone.

The real benefits of the Cloud so far
As of right now (and the foreseeable near future) consumers will only interact with the Cloud through SaaS. As global tech literacy increases, this might change, but for now common users will only see as far as Apps let them see.

In 2014, the people taking advantage of the Cloud are IT people. PaaS and IaaS are revolutionizing their world. Facebook, Google, Dropbox, Amazon ... none of these would be technically possible if it weren't for an underlying Cloud. Even better, as a developer you shouldn't ever have to worry about scaling anymore. If your app integrates cleanly with the Cloud (which should be the new challenge for app developers - and which is why you should start learning your PaaS/IaaS APIs ASAP), the Cloud promises seamless scaling as your app grows.

Of course this is highly theoretical and the devil lies in the details, as they say. But the leap forward in terms of what's doable is huge. Seriously, as long as you have the bank account to go with it, a small team of devs can run apps larger than you can imagine.
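To make the "seamless scaling" promise concrete, here is a toy sketch of the control loop an autoscaler runs on your behalf. The thresholds and the doubling/halving policy are invented for illustration; real PaaS autoscalers expose such knobs as configuration and meter you for what the loop decides to run.

```python
def desired_instances(current, cpu_percent,
                      low=20, high=80, min_n=1, max_n=100):
    """Naive autoscaling policy: double under load, halve when idle.

    All thresholds here are hypothetical; real platforms let you
    tune them (and bill you for whatever the loop decides to run).
    """
    if cpu_percent > high:
        return min(current * 2, max_n)   # scale out aggressively
    if cpu_percent < low and current > min_n:
        return max(current // 2, min_n)  # scale in gently
    return current                       # steady state

# A spike doubles the fleet; a quiet period shrinks it again:
print(desired_instances(4, cpu_percent=95))  # → 8
print(desired_instances(8, cpu_percent=10))  # → 4
```

The point of the sketch: the scaling decision itself is trivial; the Cloud's value is that the platform, not your ops team, executes it against real machines.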

Again, look at how DigitalOcean is disrupting the hosting market.
There's quite a lot of extensive reading to be done, and I wish I could say I have gone through all the threads with the level of focus they deserve, but I have skimmed through enough to make sure my contribution is not a regurgitation of someone else's thoughts. Most if not all of the threads have tackled Cloud Computing from a strictly technical standpoint, neglecting (to some extent) multiple aspects that influence or completely overturn the directions technologies take. Some of these aspects include, but are not limited to:

1. Socio-Political Shifts
2. Socio-Economical Shifts
3. Innovation vs ‘Invention’
4. Closed Industry Disruptions (I have not discussed this point below, maybe I will update this post later).

One can only attempt to predict a probable future based on past phenomena with possible cyclical capacity (a point BL brushed upon), on extrapolation from present and past data points/references, and/or on a close study of possible socio-political and socio-economic paradigm shifts derived from the fundamental aspects of human nature and psychology.

Socio-Political Paradigm Shifts
If we are to learn anything from the 21st century's happenings, it is that mature platforms that have reached a certain level of virality and global reach have a huge influence on the underlying technology (examples: social networks, the commoditization of smartphones and smart mobile OSes, etc.), which in turn has a large impact on society and international politics. The latest NSA revelations on governmental espionage have triggered tidal waves across the globe which we will keep feeling for the foreseeable future. The influence of social media and the democratization of 'sharing' has deeply affected numerous governments in the Middle East region, if not worldwide (the catalyst, the why, the who, the ethics / good and bad, are irrelevant here).

The above has had an impact on information distribution. We have seen glimpses of the coming changes, but it will take some time before a more powerful paradigm shift takes place (this will be clearer when I discuss point #4).

Given my current experience in the Gulf, the above has had dramatic effects on how large enterprises look at Cloud Computing and Private Clouds. Cloud has always been difficult to sell as a concept; now it's even more difficult with all the paranoia (mostly justified) associated with having confidential and mission-critical data stored in remote locations. Where large investments are not an issue, where over-abundance of resources is not a problem and 'wasted' resources are not even considered in IT related investments, the 'peace of mind' associated with on-site data centers outweighs all the pros of Cloud investments.

That is not going to change anytime soon (by soon I mean the next 10 to 15 years). The idea itself is not yet attractive to decision makers. The Gulf is not a small market, and given its size it cannot be overlooked. This is a critical point for commercial infrastructure providers.

Jurisdiction, Espionage, Laws, Accessibility, Disaster Recovery are all major concerns that are not addressed in the region.

Socio-Economical Shifts
How long will East Asian low labor costs continue to drive down the cost of hardware? How long will China remain the factory of the world? How long will the abundance of natural resources be neglected and remain only a topic for statisticians and other theorists to ponder upon? How long will the ‘apparent’ world peace be sustained? How long before global markets experience the next crash? So on and so forth.

Of course, it can be argued that these semi-apocalyptic notions, steeped in uncertainty, are at the core of sci-fi interpretations and of Nassim Taleb's books (The Black Swan and Antifragile). Since BL's inquiries were not bound to a narrow, short-term timeframe, there's nothing wrong with asking: what if?

China, the factory of the world, is currently experiencing some internal backlash. What if this is not an isolated incident, and is actually the trigger for a much larger movement? A revolution, maybe? The century's revolution? If history has taught us anything, it is that many events are orchestrated, but most of the impactful ones were not. How will the global economy, and more importantly technology, be affected by the disruption of its 'factory'?

Innovation vs ‘Invention’
Let me start by inviting you to watch Alan Kay’s cynical, sarcastic but passionate talk about Innovation vs Invention.

Alan Kay
http://www.youtube.com/watch?v=gTAghAJcO1o

Whether or not you agree with some of what he said, and whether or not you are repulsed by his evident bitterness or annoyance at how things are today, as scientists, engineers, developers and others, I'm sure you can relate to and empathize with his notions. I do.

Throughout the contributions to this thread that I was able to read, there's a large emphasis on innovation and incremental technological change, but no mention whatsoever of Inventions flipping our realm on its head! At the pace technology has been evolving, there's no reason not to expect some inventions that will have a dramatic effect on all our lives within the 15-year boundary of the question.

Quantum Computers have started to climb the Hype Cycle (http://en.wikipedia.org/wiki/Hype_cycle). They will definitely not reach any level of mass adoption within the 15-year boundary, but they 'might' trigger numerous technological forks that could have a lesser but still important impact on all of us.

The Human Brain Project (https://www.humanbrainproject.eu/): what will this European endeavor to map the human brain mean for Computer Science, and more specifically for AI? How will AI impact our understanding of... practically... everything? (Ray Kurzweil has interesting ideas about that!)

What about Project Meshnet (https://projectmeshnet.org/)? (This example doesn't match the aforementioned projects in magnitude, but it matches them in ambition.) What about a breakthrough in a decentralized Internet? How would that impact our lives?

and more…

Each of the points above can be discussed in extensive depth. Elaborating on and answering the questions I posed will take some time; maybe the content of another LebGeeks meetup? I leave it to you to dig deeper into each of them.

As a conclusion to my post, please allow me this excerpt from Eric Schmidt's book “How Google Works” (a recommended read, whether you like Google or not).

"
Hire Learning Animals


[…] Of course smart people know a lot and can therefore accomplish more than others less gifted. But hire them not for the knowledge they possess, but for the things they don’t know yet. Ray Kurzweil said that “information technology’s growing exponentially… And our intuition about the future is not exponential, it’s linear”. In our experience raw brainpower is the starting point for any exponential thinker. Intelligence is the best indicator of a person’s ability to handle change.

It is not, however, the only ingredient. We know plenty of very bright people who, when faced with the roller coaster of change, will choose the familiar spinning-teacups ride instead. They would rather avoid all those gut-wrenching lurches; in other words, reality. Henry Ford said that “anyone who stops learning is old, whether at twenty or eighty. Anyone who keeps learning stays young. The greatest thing in life is to keep your mind young.” Our ideal candidates are the ones who prefer roller coasters, the ones who keep learning. These “learning animals” have the smarts to handle massive change and the character to love it.

Psychologist Carol Dweck has another term for it. She calls it a “growth mindset”. If you believe that the qualities defining you are carved in stone, you will be stuck trying to prove them over and over again, regardless of the circumstances. But if you have a growth mindset, you believe the qualities that define you can be modified and cultivated through effort. You can change yourself; you can adapt; in fact, you are more comfortable and do better when you are forced to do so. Dweck’s experiments show that your mindset can set in motion a whole chain of thoughts and behaviors: If you think your abilities are fixed, you’ll set for yourself what she calls “performance goals” to maintain that self image, but if you have a growth mindset, you will set “learning goals” – goals that’ll drive you to take risks without worrying so much about how, for example, a dumb question or a wrong answer will make you look. You won’t care because you’re a learning animal, and in the long run you’ll learn more and scale greater heights.
"


I will stick to the above, not because it's Eric Schmidt, but because it holds so much truth that I deeply relate to and can only hope to live up to myself, and I'm sure others will relate as well.

@Link- Thank you for some insightful comments. I'm looking forward to point #4.
Link- wrote: Where large investments are not an issue, where over-abundance of resources is not a problem and 'wasted' resources are not even considered in IT related investments, the 'peace of mind' associated with on-site data centers outweighs all the pros of Cloud investments.
Just for the sake of argument, I'd like to make a case for Public Clouds. Personally, I haven't made up my mind about the usefulness of private clouds yet.

Holding your data in your own data center won't give you "peace of mind". I spent 2 years working for an investment bank, and trust me when I say we didn't provide peace of mind. The only thing it provides is the "illusion of control".

You're not really safe
If you think a distant host is the only one that can spy on you, you're in for a big surprise. Software backdoors can be (and often are) found in all sorts of software: your OS, your middleware and the apps you are using. These creep into open source and commercial programs alike. And I'm willing to bet that we are still blissfully unaware of the majority of them.

You don't know what the fuck you're doing
If you're a banker, focus on banking. Don't waste your time trying to manage an IT infrastructure. You don't know how to do it and you're causing more harm than good. Appointing your buddy fresh out of MBA school as "head of IT" when he hasn't seen a single line of code, or has never configured even the simplest webserver, is a recipe for disaster. Your limited knowledge won't teach you how to do proper backups, disaster recovery or general security. You're at the mercy of salesmen from IBM or EMC selling you crap, while your only skill is negotiating budget with your superiors.

You want peace of mind? Let the pros make all the decisions and write them a fat check at the end of the month.

You can protect yourself from your provider
With proper crypto and other good practices, you can protect your data by making it inaccessible even to the Cloud provider hosting it. Don't try to convince me that learning good crypto is so complicated that running your own private IT infrastructure is the simpler solution.
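The workflow is simply "encrypt before upload, decrypt after download", so the provider only ever stores opaque blobs. Here is a standard-library-only sketch of that idea; the keystream construction is a toy stand-in for a real cipher, so in production you would use a vetted library (for example an AEAD mode from the `cryptography` package) rather than anything hand-rolled like this:

```python
import hashlib
import hmac
import secrets

def _keystream(key: bytes, nonce: bytes, n: int) -> bytes:
    # Toy keystream built from counter-mode SHA-256 blocks.
    # Stand-in for a real stream cipher, for illustration only.
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    """Encrypt-then-MAC: the provider stores nonce + ciphertext + tag."""
    nonce = secrets.token_bytes(16)
    ct = bytes(a ^ b for a, b in zip(plaintext, _keystream(key, nonce, len(plaintext))))
    tag = hmac.new(key, nonce + ct, hashlib.sha256).digest()
    return nonce + ct + tag

def decrypt(key: bytes, blob: bytes) -> bytes:
    """Verify the MAC first; refuse anything the provider tampered with."""
    nonce, ct, tag = blob[:16], blob[16:-32], blob[-32:]
    expected = hmac.new(key, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("blob was tampered with")
    return bytes(a ^ b for a, b in zip(ct, _keystream(key, nonce, len(ct))))
```

Since the key never leaves your machine, the host can neither read the data nor silently modify it; the HMAC check catches tampering on download.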

In short
@Link- I'm not saying you're wrong. The "I'd rather keep my data on premise" attitude is a very serious and real trend. I just think those who think like this are misguided. Hell, I'm in the business of private clouds, and if big enterprises all moved to public ones, my employer would go out of business.

I think there could be legitimate reasons to have a private (or a hybrid) cloud, however "peace of mind" surely isn't one of them.
@rahmu Fully aligned; the single quotes were there to signal sarcasm :) I do agree that this illusion of 'security' and 'privacy' should not outweigh the benefits of a hybrid model. However, decision makers in these regions are very biased towards their own opinions and cannot be easily persuaded. At the end of the day, most of the pros I can think of for jumping into the Cloud benefit the IT team and the employees in general. If there's no measurable direct impact on the stakeholders, then it's the least of their concerns. (Unfortunately)
Forgot to say something:
P.S.: BL, around 9 or 10 years ago, you sent a young boy in Lebanon, through a friend of yours in the north, a DVD full of Linux distributions and ebooks on different topics, from C programming to Linux to cyber security and reverse engineering. Most of the time in life, we cannot quantify the impact of our actions on others. That gesture had a phenomenal impact on me as a young boy with a passion for computers. I haven't forgotten, and it's time to say thank you.
That was beautiful :)
20 days later
hello,

at this point i just wanted to say that i haven't neglected this topic and thread intentionally. i intend to continue the dialog once i get a chance.

for the past weeks i've been traveling for work and doing over 10hrs a day, so i am worn out. i'm still booked for the coming two weeks.
in the meantime i hope that more people will share their own insight, perception and opinions on this topic.

@Link, it is one thing for a person to give pointers; it's another thing to have initiative. you had the initiative and the objective, i just tried to give you the pointers to get there faster. i don't know to what extent you have been able to benefit from this, but whichever it is, you seem to have come a long way in a short time. keep your sight on your goals and keep your eyes open to what is happening around you, hence cloud, automation and robotics.

dunno if i mentioned this earlier, but a recent piece of research listed 700 common jobs and contemplated what things will be like in 10 years. it concluded that 47% of those 700 jobs will not exist anymore in 10 years.

as for rahmu, point your sight towards scalerack related topics, it is the next big thing to follow the cloud and big data.

regards
BL

PS: people, seriously, are you giving this any thought, and how it is going to impact you? if you are not comprehending what is being stated, say so, so that others know how to further elaborate and clarify things.

now for some dinner and back to the hotel room to continue work..
BL wrote: as for rahmu, point your sight towards scalerack related topics, it is the next big thing to follow the cloud and big data.
Can you please elaborate? A Google search didn't yield any meaningful results.

Since you revived the topic: I just spent the past week at the OpenStack Summit. According to the organizers, almost 5,000 people came to talk about the latest OpenStack release as well as user feedback. Here are the trends that dominated the event:

It's only cool if it works on Docker
Containers are the stars of the show. 2 years ago, nobody knew containers existed. Today everybody needs them and needs them now. Docker and Google's own Kubernetes are on everybody's lips. How can you offer containers as a service? Is it a coincidence that Google announced its Container Engine during that same time period?

Canonical is riding the container hype wave by announcing its own product: LXD, a way to use LXC containers as hypervisors. In theory this should provide the best of both worlds. I haven't tried it yet, so I don't know how much of it is just marketing.

I'll open a small parenthesis for Python developers (OpenStack is a big pile of Python, after all). Dox is a project that lets you run tests inside a Docker container, which allows you to test against multiple versions of Python and its dependencies in a sane way. If you're familiar with tox, it's the same idea, except tox uses virtualenv.


CI/CD
It is not enough to know how to build a Cloud. Today you want to automate the deployment as much as you can: build it, patch it, test it frequently. You want to talk DevOps. More and more people seem to be testing their deployments continuously, and it is yielding results: better quality, early detection, quicker failures. This particular field is dominated by a serious tool turf war.

In a nutshell, in order to automate the deployment of your infra you need two tools:
  • A configuration management tool to handle the metadata about your services, like which version, which package, which configuration file/options, ...
  • An orchestration tool that handles the deployment itself by solving dependencies and distributing tasks
There are many different choices here, from the old school tools to the newer ones. Automatic machine provisioning seems to be at the heart of the debate, with some projects getting a lot of love.
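The orchestration half of that split can be illustrated in a few lines: given service metadata (the kind a configuration management tool holds), the orchestrator's core job is dependency resolution, which is essentially a topological sort. The service names below are hypothetical; Python's standard library even ships the sorter:

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# Hypothetical metadata: each service maps to the services that
# must already be up before it can be deployed.
deps = {
    "database":    set(),
    "cache":       set(),
    "api":         {"database", "cache"},
    "webfrontend": {"api"},
}

# static_order() yields a valid deployment order; independent
# services (database, cache) could also be deployed in parallel.
deploy_order = list(TopologicalSorter(deps).static_order())
print(deploy_order)  # e.g. ['database', 'cache', 'api', 'webfrontend']
```

A real orchestrator layers task distribution, retries and rollback on top, but the dependency-solving core is exactly this.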
Stop building POCs
This is probably the only negative thing I took away from this conference. OpenStack has attracted a lot of major players to its community: Red Hat, Intel, Cisco, Dell, HP, IBM, and I was also surprised to see big names from the proprietary world like EMC, VMWare, Oracle and even Microsoft. This is good, but with great money comes great marketing hype. While there are some major deployments of OpenStack, like Comcast's several thousand nodes spanning 20+ datacenters, the majority of the players in the community are merely building proofs of concept (POCs) or really tiny clouds.

What I'm trying to say is that some early adopters have taken the leap of faith, but while the Cloud is going to be the future, it's not yet the present of the industry.
i have unfortunately lost the momentum of where my thoughts were going since the last time i had a proper chance to sit down. so forgive me if my catch-up is a bit sloppy as i attempt to reply to the several threads i have missed over the last four weeks.


@Arithma, as it was in the 90s, at the turn of the millennium and today. back in the days of geocities and tripod and the like, many speculated on what they really served. some were ahead of their times, others just lame copies of others. the "disruption" was that such free service-based solutions competed with expensive ISP-based services. a few ads here and there didn't bother most bloggers back then if they got hosting for free instead of paying the likes of $20/month for just a basic HTML website. i remember back in the mid 90s, coders were charging an average of $400 for an html page with several paragraphs and two pictures. yes, those were the days. the point here is: when there is something conventional and, at the same time, something new, the charge is horrific. this phenomenon has time and again been followed by a "disruption". the disruption in this case is the act of rebranding and providing different means to achieve the same goal. this breaks the market's monopoly, branding and how solutions are delivered and served. back then, sites such as tripod and the like offered free generators that rendered basic sites. this later led to the development of many horrific WYSIWYG apps to achieve the same. so the market shifted from coders who got to charge big money to companies that developed generators or WYSIWYG apps to achieve the same end goal.

so in today's topic, the cloud and what it means: in short, it is a phenomenon followed by a disruption, a shift that will make things once again different from what we are accustomed to. for example, you no longer own or host a physical server that you labor to maintain and operate. instead of having to rely on conventional services and operations, you can concentrate on the actual part of just having the app, worrying less about the underlying moving parts. when that stage is reached, it matters less whether it is operating through your own tedious labor, and more about multiplying the benefit of the single app in question. because of this, the shift is moving towards the age-old concept of xAAS: i start from some point and concentrate on what's above, not what's below. xAAS is based on policies, compliance and SLAs. should the app/solution in question require stricter compliance, then you have one set of services/products you can use. if it is more lenient, then something else.

for example, a company can host its public websites on amazon. it can host its intranet at the local ISP and its email at google. it's distributed per where you have brokered the deal and service that serves you best. at that point, you don't care much whether it is centos or ubuntu, just as long as it is running. or then again, why be dependent on custom-made legacy apps? buy your app as a service: for example, set up your own bank or insurance company, do the paperwork with your local authorities, and purchase the application that does the internet banking and insurance for you. don't buy it, rent it as you use it, and have that app suite run against your selected bank or rates. your local dekkene can become your next local bank...

so the opportunity here with the cloud is to come up with, for example, the equivalent of html generators or WYSIWYG apps. these are the things every coder today should be prepared for and embrace, or find themselves just as archaic and legacy as the many old programs that dominated in the 80s and 90s.


@ALL,

defining the cloud: it is next to impossible to give a definite definition of what the cloud is, for the cloud, in truth, is a generic term that applies and reflects differently across different parts of IT. for a CIO, it's more along the lines of standardization, consolidation, metrics, processes and the works. for a coder, it's more about new areas, new programming languages, new application/solution architectures, etc. for an admin, it's more devops-type work: no more working in silos where the admin is a subject-matter expert in one field, for example DB; the applicable knowledge needs to be extended to other domains such as frontend apps, auditing logs, resilience and replication, etc.

in short, cloud is really a generic term. for one person it means and applies to one set of disciplines, and for another, another set. the basic idea is that all are revamped into a form that makes them more compatible and streamlines work end-to-end. for example, being able to deploy a full-fledged datacenter in less than a month, or deploy your own branded netflix-equivalent service in about the same time or less, at low cost.

such simplification and standardization enables you to repurpose your data, to multipurpose it. just as with your car, you can go on a sunday drive, use it to deliver stuff, or use it to race; you need to do the same with your data. this is what the idea of big data is: the ability to repurpose your data. GE, for example, registers about 20tb worth of data from different sensors on a single cross-atlantic flight. over the years, the amount of data has accumulated to the point of saturation. they took that data, refined and enriched it, and ended up with new products and services that they can sell!
http://www.zdnet.com/general-electric-launches-data-lake-service-to-streamline-industry-big-data-7000032489/
https://www.gesoftware.com/industrial-data-lake

one of the main initiatives of the cloud is to cut down on TCO, improve OPEX and cut out unnecessary CAPEX.

rahmu's example of digitalocean is a good one, comparable to the tripod and geocities of the 90s.

look at netflix: they do not own a single datacenter or much of any hardware. it's all on the "cloud", amazon and the likes!

@Link,

selling the cloud as virtualization or as consolidation: neither suffices to cover what the cloud is. such sales pitches and attempts are lame and should be stopped. everyone wants to be on the bandwagon, causing a horrible distortion around the cloud hype. it's like when apple wanted to add an i- prefix to everything, or when companies wanted to add an e- prefix to signal that companies and services are "electronic"-enabled. lame attempts with poor results, of which not much remains today (let's not refer to itunes ;). as i mentioned earlier, the cloud covers all aspects; it doesn't suffice to use kvm or python or whatever, you need to dust everything end-to-end to be able to claim the virtue of a cloud.

there has been a shift in specifying SLAs and criteria. often, the selected solution has had to have the characteristics of fault tolerance and high availability. even though those still apply, there is a shift towards accepting resiliency as an equivalent. the idea of self-healing and resilient technologies enables more distributed architectures and sourcing. i am keen to see how long it will be before we see more applications and solutions based on resiliency instead of high availability or fault tolerance.
http://radar.oreilly.com/2013/06/application-resilience-in-a-service-oriented-architecture.html
http://techblog.netflix.com/2012/11/hystrix.html
https://www.open-mpi.org/projects/orcm/
http://www.macroresilience.com/category/artificial-intelligence/
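the resiliency pattern behind tools like the Hystrix link above can be sketched in a few lines: a circuit breaker that, after repeated failures, fails fast instead of hammering a struggling dependency, then lets a probe call through after a cool-down. this is a minimal illustration of the pattern, not Hystrix's actual API:

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker: after `max_failures` consecutive
    errors, calls fail fast for `reset_after` seconds instead of
    hitting the struggling dependency again."""

    def __init__(self, max_failures=3, reset_after=30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None  # None means the circuit is closed

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None  # half-open: let one probe call through
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()  # trip the circuit
            raise
        self.failures = 0  # any success closes the circuit again
        return result
```

the point is that a resilient service degrades gracefully (fail fast, fall back, recover on its own) instead of relying purely on redundant hardware for availability.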

an abundance of labor force is no guarantee of delivering quality on time. this is something many companies are struggling with after having outsourced a lot to India. it's because of this that companies want simpler deliveries, on time and on budget. for this to happen, there has been a change in approach: from the cathedral way of working to adopting the scrum and agile way of working.

there are lots of changes ahead. the reason is that there no longer is an excuse for not being able to materialize a vision, thought or concept. today, you can already sketch on paper what kind of mobile device you want, and what kind of custom os and sensors it should have for a particular use. maybe, for example, a disposable tablet along the lines of disposable waterproof cameras, sold exclusively to travel agencies? imagination is the limit. all that we as kids used to dream of can be done today, and we will see lots of innovation. one of my personal interests to follow is mesh computing and automation: for example, how can your car communicate with a red-light pole and with all the individual cars at the same intersection? i think there will be many brainfarts and many breakthroughs. in short, many things.

in regards to quantum computing, that has been going on for a while. the NSA and google, for example, are both intensively researching quantum computing: the means to crunch numbers at an astronomical rate, the ability to dive into complex research in a very short period. hell, i would be surprised if in 20 years your local pharmacy has a quantum pc crunching out ebola-equivalent custom-refined and personalized medication for your own gene sequence. a tiny bit of imagination ;)

as for eric schmidt, he really hasn't stated anything that hasn't been stated before; he just got the spotlight for it. the communities and people with whom i have enjoyed most of my time happened to be the ones that a stereotyping society would classify as outcasts, rebels without a cause, etc. those people were more creative than those i have seen in suits and ties. braver to express themselves, braver to question and rethink. those people did not stop thinking, and they did not stick to a desk job.


@rahmu,
mangle the words: slacerack, rackscale ;)
here is a short example or elaboration:
http://research.microsoft.com/en-us/projects/rackscale/

in short, hardware that scales in different ways: small footprint, specific usage, grid integration, etc.

a few weeks ago i was checking an implementation where the customer had a heavy-duty storage array that was well loaded with activity. the fun part was that the activity came from a box similar to this one:
http://exxactcorp.com/index.php/solution/solu_detail/126

such hardware is deployed with the concept of resilience, not fault tolerance. tell me, how much could you do with 192 cores fitted in two 2u chassis? what about a 40u rack, 20x192 cpus? that takes hpc and resilient services to a new level.

There is a devil's advocacy to be played when it comes to OpenStack itself, but that is a discussion
of its own.

In regards to containers, mate, they have been around for ages, starting from the mainframe
era, later seen as compute partitioning. Solaris had zones, HP-UX has the equivalent.
For a long time we have had chroot, and for a number of years, OpenVZ. Docker continues
on the virtue of these predecessors; it's just being simplified and rebranded. How big
is the fuss? A lot of hype if you ask me, but also a lot of improvement, the kind we wish for.
The biggest advantage is simplification of deployment. Just imagine: instead of
untarring source code and compiling, or installing RPMs and configuring, you just drop a container
and run. Yes, it is a big improvement. It has been tedious labor to maintain the RPMs, the source code,
the compilation. I wouldn't shut out the thought that in the future we no longer compile;
it's all scripts. Nothing new.

I remember some years ago having a dialog with a friend: when it comes to virtualization, you
want to consume your CPU, you want to consume your RAM, and you want to do this in a smart way.
With virtualization you are running multiple kernels on top of one, each encapsulated in
its own private "eco-system". What about maximizing by eliminating the need to run a useless
number of kernels when they are not required? Containers are a must, are a need, but do not
serve the whole plethora; they can function for given needs.

Yes, there is no future without a data plane and a control plane; that is where we will be
seeing the abstraction of automation and orchestration.

As we have seen over the years, whenever there is a new area there are at least
a dozen upstarts that want to challenge and dominate, and only a few survive. If you ask me,
there is one too many amongst Puppet, Chef, Fabric, SaltStack, Ansible, Serf, ViPR, etc.
Seriously, where to start? How to compare, how to measure, how to weigh?


The topic of PoC is also one that can be spun off as a separate dialog. Everyone wants
to prove something, and most want to do it the most conventional way:
present a live demo. Others do it by setting a price tag on top, ready to sign off if
the targeted customer is keen.

Now off to sleep, maybe more opinions and critiques to quote tomorrow ;)
Most apps I've come across don't really need distributed computing. They're happy running on a single server.
Then what does the cloud bring me?
Of course they do not need to be; that goes without saying.
There are at least two types of computing apps: one simpler, the other more complex. A Hello World might not require resiliency, high availability or fault tolerance.

As things become more service oriented, resiliency becomes the essential characteristic. For example, an SMTP server would be made redundant by:
- dns round robin load balancing
- mail queue buffering
- mail forward retry intervals

If one SMTP server fails, the service continues through the other server. If it fails to forward the mail the first time, it will send it again later. In a sense it is resilient, but in this case it is not self-healing: it would still require an admin's intervention.
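The failover-plus-retry behaviour described here can be sketched from the sending side too. A minimal Python sketch, assuming two hypothetical MX hosts (the mx1/mx2.example.com names are made up) and using only the standard smtplib:

```python
import smtplib
import time

# Hypothetical MX hosts, as a DNS round robin would hand them out.
MX_HOSTS = ["mx1.example.com", "mx2.example.com"]

def send_with_failover(sender, recipient, message, hosts=MX_HOSTS,
                       retries=3, delay=60, connect=smtplib.SMTP):
    """Try each host in turn; if all fail, wait and try again later,
    mimicking a mail queue's forward-retry intervals."""
    for attempt in range(retries):
        for host in hosts:
            try:
                with connect(host, timeout=10) as smtp:
                    smtp.sendmail(sender, recipient, message)
                    return host          # delivered via this server
            except OSError:
                continue                 # this server is down, try the next
        time.sleep(delay)                # "queue" the mail, retry later
    raise RuntimeError("all MX hosts unreachable after retries")
```

The `connect` parameter exists only so the host-picking logic can be exercised without a live mail server; and note this is still the "resilient but not self-healing" case, since a dead host stays dead until someone fixes it.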

Look at HPC: it's basically grid computing where the app does local calculation and communicates with its server. If a node or app crashes, the service is still running. If it requires more crunching power, you add more nodes to the grid.

In the case of simple apps, it's the hosting that you don't have to worry about. It is the underlying maintenance and availability that you don't have to worry about. The app would be part of a framework which would make it easier to extend without having to create a middleware app or a proxy app to do that integration for you. Integration becomes simpler, just as it has become simpler to integrate using, for example, the Twitter, Facebook and Google APIs.
rolf wroteMost apps I've come across don't really need distributed computing. They're happy running on a single server.
Then what does the cloud bring me?
Rolf, there's some confusion in this sentence.

Every webapp is, by definition, distributed
Even in the simplest case, you've got a client sending a request to a web server that forks[1] a copy of your app to execute the request. The app will probably be interacting with a database, which is yet another component. The fact that all these components sit on a single machine (well, except for the client in most cases) doesn't make it any less distributed. It's just a bunch of components interacting using networking protocols.

[1]: I mean the generic term of forking; it could use threading or an event loop like nginx, but that's an implementation detail.
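To make the point concrete, here is a toy Python example (all names invented) where all three components — client, web server, database — live in one process on one machine, yet still talk over HTTP and SQL like any distributed system:

```python
import sqlite3
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Component 1: the database (in-memory, shared across threads for the demo).
db = sqlite3.connect(":memory:", check_same_thread=False)
db.execute("CREATE TABLE greetings (body TEXT)")
db.execute("INSERT INTO greetings VALUES ('hello from the db')")

# Component 2: the web server / app, which queries the database.
class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        row = db.execute("SELECT body FROM greetings").fetchone()
        self.send_response(200)
        self.end_headers()
        self.wfile.write(row[0].encode())

    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)  # port 0 = pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# Component 3: the client, talking to the server over a networking protocol.
url = f"http://127.0.0.1:{server.server_address[1]}/"
with urllib.request.urlopen(url) as resp:
    reply = resp.read().decode()
print(reply)
server.shutdown()
```

Everything sits on "localhost", but nothing about the interactions would change if each component moved to its own machine.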


It's the traffic that dictates the number of servers
Take the most monolithic application you can think of. Let's imagine a theoretical application that runs entirely in a single process[1]. Even this app will need to be replicated across servers once the machine cannot handle the incoming traffic. Adding more servers is not dictated by the nature of your app. Every distributed application could in theory be running on a single node. We multiply nodes because we need more resources.

[1]: Some genius somewhere running node.js probably thinks it's a good idea.


Running an app in production on a single node could be a terrible idea
It's not just about the traffic, really. Even on a low scale, I'm not comfortable running an app on a single server.

Question: what do you do if your server fails? If everything is on one node, you have a major Single Point of Failure. Your app is unreachable and it could take hours (if not days) to bring it back online. Not to mention the very high risk of losing data that didn't make it to the last backup. So what do you do? Look at your customer and say "tough luck"?

It's best to design your app with high availability in mind. That means replication, load balancing, db sharding, redundancy, failover, etc... As you can imagine, this would require several machines.

It's more expensive for the customer, but downtime and data loss are, usually, much more expensive.
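As a rough illustration of the load-balancing-with-failover idea (backend names like `app1:8000` are made up for the sketch), a toy round-robin balancer in Python might look like:

```python
import itertools

class RoundRobinBalancer:
    """Toy load balancer: rotate across replicas, skip dead ones."""

    def __init__(self, backends):
        self.backends = backends
        self.dead = set()
        self._cycle = itertools.cycle(backends)

    def mark_down(self, backend):
        # In a real setup, health checks would mark nodes up/down.
        self.dead.add(backend)

    def pick(self):
        # Try each backend at most once per pick.
        for _ in range(len(self.backends)):
            backend = next(self._cycle)
            if backend not in self.dead:
                return backend
        raise RuntimeError("no healthy backends: total outage")

lb = RoundRobinBalancer(["app1:8000", "app2:8000", "app3:8000"])
lb.mark_down("app2:8000")   # simulate a node failure
print([lb.pick() for _ in range(4)])
# → ['app1:8000', 'app3:8000', 'app1:8000', 'app3:8000']
```

With one replica down, traffic keeps flowing to the survivors; only when every backend is dead does the service actually go out, which is exactly the SPOF argument above.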


The cloud is just a fancy term for resource management
It has been said before: the Cloud is just a new way to look at your resources. In the old days, if your application was running low on resources, you'd call your hosting service and order a new machine. When you reach a certain scale, you're ordering hundreds of machines per year and this takes a lot of time. In big corporations an order of 50 servers could take weeks to be available. Not to mention that once you receive these machines, you need a lot more extra time to configure them and install your app on them. Cloud computing promises to make this a lot easier by using a blend of virtualization and heavy automation. Some cloud platforms are now offering Autoscaling features, which means that when your app needs more resources, the cloud will automatically know what to do to add them. No human intervention required; you only know about it at the end of the month upon receiving your bill.
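Under the hood, an autoscaling policy boils down to a small decision rule. This Python sketch shows the general shape; all the thresholds are invented policy knobs, not any real cloud API:

```python
import math

def desired_replicas(current, avg_utilization, target=0.6, min_n=2, max_n=20):
    """Toy scale-out rule: keep average utilization near `target`.
    `avg_utilization` is the fleet's average load as a fraction (0.0-1.0)."""
    total_load = current * avg_utilization      # work expressed in "full nodes"
    wanted = math.ceil(total_load / target)     # nodes needed at target utilization
    return max(min_n, min(max_n, wanted))       # clamp to the allowed fleet size

print(desired_replicas(4, 0.9))   # → 6, overloaded fleet grows
print(desired_replicas(4, 0.3))   # → 2, idle fleet shrinks to the floor
```

A real platform runs a loop like this against its metrics and then provisions or destroys virtual machines to match the number, which is the "heavy automation" part.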


It's really about scaling
If you run an app with a few dozen connections a day, and you're pretty sure this number is not going to increase any time soon, you don't need the Cloud. The Cloud is a tool to handle a major pain point in the industry: scaling. How do you handle traffic that seems to be constantly increasing? It is worth mentioning that in the good old days, higher traffic meant getting a bigger machine (we call this scaling "up"). The cloud allows spawning machines and destroying them on demand. It allows scaling by adding a lot of small machines (scaling "out"). Scaling out ensures you have no SPOF and is generally made easier thanks to Cloud APIs.

I hope this helps.