Hello,

Among the things I have been contemplating lately are the upcoming changes in the IT industry and, in turn, the industries it serves. Going through those thoughts as part of my job and the works, I thought of asking a few elementary questions about how others perceive the following:

- Knowing the industry and the social and economic arrangement that surrounds you right now in its current form, what do you perceive things will be like at the upcoming milestones: 2016, 2020, 2025 and 2030?
Just play with the thought that you are Nostradamus attempting to describe what things will be like in your close vicinity: for example, programming, purchasing habits, employment arrangements or retirement arrangements.
- After you have given the previous question some thought, how do you see yourself embracing those upcoming changes and challenges? In other words, what are you ready to do to tackle them? Are you ready to learn more? Are you ready to get out of your comfort zone? Are you ready to sacrifice more? Will you give up on other dreams, and which ones?
- Now that you have revisited the previous two pickles, have your thoughts changed from what they were? If not, did you revise them from a broader perspective? If yes, what aspects did you take into account? For example, if you are a programmer, is your initial thought that yesterday you programmed in C, today you program in C and tomorrow you will program in C? That is just an example of the line of thought here.

More questions would fit, but three will suffice for now.

There have been different ages in history. Each age has had an effect on the development of science, culture, economy, etc.
During the upcoming six years there is a new age we are going to go through. It is not going to be wrapped in a beautiful gift parcel,
and it is not going to be filled with glory. There is a lot of mayhem and turmoil ahead with the huge change in economics that is coming.

What do you foresee happening in the next six years? And of what you see, what are you doing about it?

Regards
BL

PS: sorry for a cryptic post, but it is something worth thinking about.
I don't like to make unrealistic, far-fetched predictions. However, I can relate some trends I've been seeing in the industry for a while, and I'm convinced we're going to be seeing a lot more of them pretty soon. Since I'm a programmer, I'll talk about trends in the programming world. I don't know what this will mean for the consumer.

(And anyway, everybody knows that consumer trends are not dictated by logic; they're dictated by the marketing department at Apple).

The Cloud

It's part of my job to evangelize about The Cloud.

I'm not being extremely original here. The whole industry has been talking about the Cloud for almost a decade now. I'll just reiterate the importance of not being late to jump on that ship. For a developer, this mainly means getting familiar with IaaS and PaaS APIs. For instance, I'd highly recommend getting familiar with as many of these platforms as possible.

Those are the ones I have played with, across both IaaS and PaaS, and there are others (like Microsoft Azure or VMware's vCloud). As a developer, you can safely assume that everything you know about infrastructure, provisioning and deployment is going to change soon. The future is for "smart" Cloud-ready applications that know how to leverage the underlying Cloud.
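To make that concrete, here is a minimal sketch (not from the original post) of what driving an IaaS API from code looks like, using the AWS SDK for Python; the image ID and key name are placeholders, and the same request/response pattern applies to OpenStack or any other provider's API.

```python
import boto3

# Minimal sketch: provision a VM through an IaaS API instead of clicking in a console.
# The image ID and key name below are placeholders, not real resources.
ec2 = boto3.client("ec2", region_name="eu-west-1")

response = ec2.run_instances(
    ImageId="ami-00000000",    # placeholder AMI
    InstanceType="t2.micro",
    KeyName="my-deploy-key",   # placeholder key pair
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])
```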

Rule of thumb: If you're still deploying your website by uploading a new version via FTP, you're in danger of becoming as archaic as your average COBOL programmer. Soon.


DevOps

DevOps is a relatively new buzzword that we're going to hear a lot more of in the upcoming years. It designates all the techniques that make programmers ("developers" or "dev") and sysadmins ("operations" or "ops") work closer together. I want to focus in particular on two techniques: Continuous Integration and Continuous Delivery.

CPU time is cheap, and it will remain so for the foreseeable future. One of the areas most affected by this will be Quality Assurance. Continuous Integration (CI) refers to the idea of running batteries of automated tests against each and every patch submitted to a living code base. A few years ago, projects that had strong testing were ahead of their competition. In a few years, projects that don't have strong testing will be lagging behind. CI is becoming ubiquitous. It's so cheap to set up that some people (like Travis CI) can even offer it for free.
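As an illustration of the kind of check a CI service runs on every patch, here is a toy test file; the function and the file name are invented for the example, and with pytest installed the whole suite runs with a single `pytest` command on each push.

```python
# test_discount.py -- a toy unit test a CI system could run against every patch.

def apply_discount(price, percent):
    """Return the price after applying a percentage discount."""
    return round(price * (1 - percent / 100.0), 2)

def test_apply_discount():
    assert apply_discount(100.0, 25) == 75.0
    assert apply_discount(19.99, 0) == 19.99
```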

Continuous Delivery (CD) is a slightly newer idea that will also grow remarkably as adoption of the Cloud increases. CI consists of running tests against new code. CD consists of running tests against new versions of your platform. Have you ever had a middleware upgrade crash your whole application? How can you test that a new configuration of your web server or your load balancer is not going to break certain functionality? CD consists of testing these configuration changes before deploying them to production. From an application developer's perspective, that will require familiarity with configuration management tools like Puppet, Chef, Ansible and so many others.
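A minimal sketch of that idea, assuming a staging environment that receives every configuration change first: a small smoke test that has to pass before the change is promoted to production. The URLs are hypothetical.

```python
import sys
from urllib.request import urlopen

# Hypothetical staging endpoints to verify after a config change, before promoting it.
ENDPOINTS = [
    "https://staging.example.com/healthz",
    "https://staging.example.com/login",
]

def smoke_test(urls):
    failures = []
    for url in urls:
        try:
            with urlopen(url, timeout=10) as resp:
                if resp.status != 200:
                    failures.append((url, resp.status))
        except Exception as exc:
            failures.append((url, exc))
    return failures

if __name__ == "__main__":
    failed = smoke_test(ENDPOINTS)
    for url, reason in failed:
        print(f"FAIL {url}: {reason}")
    sys.exit(1 if failed else 0)
```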

At my job, we have adopted CI/CD as part of our workflow. The feeling of being able to deploy to production several times a day, while having confidence that we're (most of the time) not breaking things, is priceless. I'm fully convinced that this will be the norm in the industry pretty soon.

If you're interested in learning more about Continuous Delivery, this is an interesting (albeit a little too self-promotional) article written by the fine folks at Ansible.

part 2 coming tomorrow. I'm sleepy now
Distributed Storage

As another consequence of the advent of "the Cloud", the next few years will force developers to think differently about the persistent storage of their data on disk. A few years ago, your storage options were limited to using an RDBMS or writing to a file on disk (which, unless you were using some network-based tech like NFS, was more often than not a poor decision). The future will offer more complex choices, meaning more flexible and more precise decision making.

The modern developer will need to understand the differences between the three types of distributed storage: block, object and file. If you're curious, here's a brief introduction to the subject.
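To give a feel for the object model in particular, here is a minimal sketch using boto3 against an S3-compatible object store; the endpoint, bucket and key are placeholders, and the same put/get pattern applies to Swift, Ceph's RADOS Gateway and friends.

```python
import boto3

# Placeholder endpoint and bucket; any S3-compatible object store works the same way.
s3 = boto3.client("s3", endpoint_url="https://objects.example.com")

# Objects are written and read whole, addressed by bucket + key, with no filesystem semantics.
s3.put_object(Bucket="my-app-assets", Key="reports/2014-10.json", Body=b'{"visits": 1024}')

obj = s3.get_object(Bucket="my-app-assets", Key="reports/2014-10.json")
print(obj["Body"].read().decode())
```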

There are also several technologies challenging the traditional RDBMS, and more keep appearing. All these technologies aren't competing; they're complementing each other to offer a wide array of solutions to different problems. It will be up to the programmer to know them and pick the right tool for the right job. Every single time.


Programming languages of the future

Concurrency will be treated as a first-class citizen. The traction that languages like Go or Clojure are getting tells me that in the (near) future, programming languages won't add concurrency features as an afterthought. It will be a built-in feature (and a founding principle) of the language itself. Apps of the future will be distributed by default, leveraging Cloud APIs and modern storage systems. Old-school programming languages like C or Java do not provide the developer with the appropriate tools to deal with the complexity of such systems.
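Sketched in Python rather than Go, just to stay consistent with the other snippets in this thread, this is the kind of pattern in question: fanning calls out to several services concurrently instead of one after the other (the URLs are placeholders). Languages like Go make this style a built-in default rather than an add-on.

```python
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

# Placeholder endpoints; imagine status checks against several cloud services.
URLS = [
    "https://example.com/",
    "https://example.org/",
    "https://example.net/",
]

def fetch(url):
    with urlopen(url, timeout=5) as resp:
        return url, resp.status

# Fan out the I/O-bound calls concurrently instead of serially.
with ThreadPoolExecutor(max_workers=10) as pool:
    for url, status in pool.map(fetch, URLS):
        print(url, status)
```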

Another big trend I see happening is that static typing is fashionable again. Arguably, the past 5-10 years have seen dynamic languages like Python or JavaScript rise drastically in popularity. One of the features these languages provide is decreased verbosity thanks to the absence of static type annotations. This impacted programmers' productivity positively and allowed for faster release cycles.

However, type theorists have been working in the shadows on better type inference algorithms, and implementations have been making their way into lots of modern languages like C++, Go, Rust or Swift. This frees the programmer from the burden of writing out the type at every turn, while still allowing compilers to do proper type checking.

Another trend that is tightly linked is that native/compiled code is fashionable again. Go, Rust, C++, Swift but also rediscovered languages like D or Haskell are all sporting native compilers. This seems to depart from the past trend of deploying complex runtimes and calling it a "virtual machine" (JVM-style). Are programmers tired of fighting against their garbage collector?

Finally, there are a few older trends still worth keeping an eye on. They're the continuation of the trends of the past 5 years or so, and they don't seem to be slowing down. Functional programming is still on the rise. Java's recent adoption of lambdas will definitely have an impact, simply because there are so many Java devs out there. Also, JavaScript is still worth following closely. It will be the universal runtime. I'd recommend looking at projects like asm.js or Google's NaCl.


Trends in Virtualization

These trends are specific to the industry I work in, but I think their impact is going to be felt in every other field of programming and IT at large.

The first trend is OS-level virtualization, sometimes called "container virtualization". This model of virtualization is a great addition to the traditional model of hypervisor-level virtualization. In a nutshell, a hypervisor is software that emulates hardware functions. In order to run a virtual machine, the user installs a separate operating system on top of her hypervisor and ends up with a separate entity to manage. Container technology removes a lot of that complexity by relying directly on the existing OS. Think of it as chroot on steroids. A process running inside a container can only access a limited subset of resources, and these resources cannot be accessed by other containers even if they're running on the same OS/kernel.

The benefits of this technique are huge, mainly in removing a lot of unnecessary overhead. Container technology has been around for a while now; OpenVZ is almost 10 years old, and Solaris and FreeBSD have had containers for a long time too (BSD calls them "jails"). However, a recent project, Docker, greatly simplifies the way admins, devs and users interact with them. This project is gaining a lot of traction, and in my opinion it will heavily influence the way we develop, ship, deploy and host our apps in the future.
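To show how lightweight the workflow is, here is a small sketch, assuming Docker is installed and its daemon is running: it starts a throw-away Alpine container, runs one command inside it and removes it afterwards. In practice you would simply type the `docker run` line in a shell; the Python wrapper is only there to keep the examples in one language.

```python
import subprocess

# Assumes Docker is installed and the daemon is running.
# Start a disposable container, run one command inside it, and remove it afterwards.
result = subprocess.run(
    ["docker", "run", "--rm", "alpine", "sh", "-c", "echo hello from a container"],
    capture_output=True,
    text=True,
    check=True,
)
print(result.stdout.strip())
```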

The second trend that should be monitored is network virtualization. This is the trend of believing that dedicated networking hardware is dead and that networking functions should be delivered by software running on commodity hardware. All the major actors in the industry are now rallied behind two movements, Software Defined Networking and the much more recent Network Functions Virtualization, to fundamentally rethink how networks should be built.

It is way too early to know exactly how it will impact us, but it is definitely disrupting the way our networks are built. They'll be more flexible, more fault tolerant and overall better adapted to the huge number of connected devices that we'll see in the upcoming years.

Of all the trends I have mentioned in this (long) post, I think that last one about network virtualization will be the most disruptive and deserves the closest scrutiny.
Thank you for starting this great discussion, BL.

rahmu, what are your thoughts regarding the ethical considerations of cloud computing and SaaS? More specifically, RMS's notion of "Service as a Software Substitute"?
I hope that this weekend I will find time to comment further. I hope people really give this some thought, as what we are talking about here concerns almost every one of you.

Rahmu, once again you took the bait! You nailed most of the things that are going on right on the dot. Rahmu's summary is well composed and worth revising with care. There are things to add and note, but more of that later. People, read Rahmu's reply and try to grasp it. One thing I would point out at this stage is that Rahmu's well composed summary is very technology centric, something you would expect from a technologist. Rahmu's compositions keep getting better and better across all discussion threads.

It is rare that I compliment someone, but this time I cannot deny the respect that Rahmu has earned in the various responses he has written. Keep up the good work, Rahmu. I don't know you in person, but I don't need to know you in person to appreciate your effort on this forum.

There is an angle I hope people will consider thinking about after giving Rahmu's summary a review:
- Why "join" the cloud?
- Where will the cloud be?
- What does the cloud impact?
- What is the momentum driving the cloud?

@Samer, regardless of open source or closed source solutions or services, there will always be services, and services are becoming standardized with the cloud. Take a good look at how, over the past years, commercial companies have recreated themselves via open source. In addition, remember that tools are tools: you will always get a newer and better model. The important thing is to remember that a tool is a tool and not the business operation itself. But more of this later; my neighbors are dragging me to the bar for some pool and beer.

Cheerio!
samer wrote: rahmu, what are your thoughts regarding the ethical considerations of cloud computing and SaaS? More specifically, RMS's notion of "Service as a Software Substitute"?
This is a very big issue, but I'm not sure I'm the most qualified to answer. I'm a techie, not a philosopher. So, as a disclaimer, take everything I write with a grain of salt.

Not as free as rms wants us to believe
I usually tend to disagree with rms (and the FSF at large), both in substance and in form. Stallman sees evil in giving others control over your access to technology. Quoting the first line of his essay (emphasis mine):
rms wrote: On the Internet, proprietary software isn't the only way to lose your freedom. Service as a Software Substitute, or SaaSS, is another way to let someone else have power over your computing.
What he fails to realize is that Free Software, too, gives someone else power over your computing. Unless I'm personally auditing every single line of code of all the software I'm using, every line of code of the compiler that built that software, and every line of code that built this compiler (if you haven't read this paper, honestly do it. It's probably the most important paper on cyber-security ever written), I will always need to trust someone, giving them control over my access to computing. Access to the source is not enough to give you total control, despite what he claims. Availability of source code doesn't shield you from backdoors. Didn't the US government sneak a backdoor into OpenBSD, arguably the freest operating system out there?

I disagree with the form of his message. I find it condescending. I don't think a good way to promote a cause is to approach unsuspecting users and insult them or talk down to them. I'm writing this from a MacBook; according to the FSF, I'm perpetuating the greatest Evil in computing. That doesn't compel me to listen to their message, even if it made sense.

Try to host it yourself
I like the Open Source culture though. I like having public access to the source code, a public bug tracker, public mailing lists, etc. In the same vein, I like hosting my own services. I think it's probably the best option out there, but it won't scale to everyone:
  • Tech literacy isn't universal. I'm a professional Cloud Computing specialist. I know my shit. Not everybody does. That doesn't make them "stupid" or "evil".
  • Even though I am tech literate, it is extremely time consuming to do this, even for the few services I host myself. Self-hosting everything I use would be the equivalent of auditing every line of code of the applications I use. It's not a realistic expectation.
  • Not everyone can afford self-hosting. I don't blame people who prefer paying with their data and/or their exposure to advertisement. I spend north of $100 per year on my online toys. Because I can afford it.
My opinion is to be pragmatic. Take control of your own services, to the limit of your skills, your time and your finances. Learn whom to trust to take care of the rest. The same goes for offline services. It's often better to do the task yourself (doing your laundry, handling money, growing food, ...) but sometimes you have to trust others to do it for you (cleaning lady, banks, supermarkets, ...). The important thing is learning whom to trust.

The issue of choosing who to trust may seem a bit complicated, but it predates Cloud Computing (and computing in general) by far. It's how we started building societies.


Why just SaaS?

I find it weird that rms singles out SaaS as the only evil online service. To be fair, he did explain why.
rms wrote: Rejecting SaaSS does not mean refusing to use any network servers run by anyone other than you. Most servers are not SaaSS because the jobs they do are not the user's own computing.
But he's not correct. If I'm hosting my own Free Software application on top of a PaaS/IaaS cloud, I'm giving my host full control over the computation I make, the data I store, the network connections my app communicates through, etc. Even before the Cloud, old-school hosting still required you to trust the company owning the physical machines you rented. Why make the distinction for SaaS only? Simply because rms found a "good" pun with SaaSS? Meh...

Conclusion
Once again, I'm just a techie. Don't listen to my opinion on ethics or morality. What the fuck do I know? My capitalistic way of thinking goes "if there's a market for it, I should probably look at it". I have strong opinions on practical issues like "You should back up regularly, because I guarantee your disk will fail" or "Use crypto everywhere because it'll make for a much more trustworthy Internet". But I don't even eat my own dog food. I'm terribly late on a lot of backups, and I don't even bother encrypting half the shit I say online. Basically what I'm saying is "Whatever works".
When old idealistic beards talk, their activist tone is very biased by idealism, something that Rahmu has already pointed out. In turn, Rahmu has so far given a granular description of various areas concerning the cloud. Most of the products and tools referred to were open source related. In the cloud there is room for commercial products and operations. Open source will not kill the commercial industry; the commercial industry is recreating itself through open source.

In regards to hypervisors, the market has become saturated with options. You do not need many to state that there are many; between what exists right now there is more than enough: ESX, Hyper-V, Xen, etc. Once again, these are just tools, each hypervisor with its pros and cons. The pros and cons are not weighed by the product itself alone, but also by how well it integrates with other tools through which you can orchestrate and automate. Another aspect is legal policy limitations, or staff knowledge and experience. One might be easier to administer than another; another might offer more resiliency features. It all depends on what you want to achieve.

There has been a huge misconception in regards to open source: since the code is open, anyone can audit it. But in reality, how many audit it? Very few are interested; it's mostly the geeks, or those with too much free time on their hands, who would be sifting through lines of code without compensation. We all want our bread and butter, which only goes to say that there is no such thing as a free meal. Open source is not a free meal, and any thoughts or ideas built on that assumption are a misconception.

The industry is changing, and it will change dramatically. Rahmu's first reply is as good as it gets for a blueprint of the cloud. Let's roll back a bit and recall what Linux was like back in 1995. Linux was deemed a lost cause; it was scrutinized by Microsoft and Sun, and then they began to admit that they could no longer undermine what it was. This forced such companies to put more effort into improving their products and to sell them at a lower price. A big differentiator between the two (commercial and open source) up until lately has been standardization. Let's be honest, the number of Linux distros out there is a nightmare. This alone has caused lots of challenges for hosting legacy open source based OSes and apps on just about any hypervisor. One aspect of the cloud is standardization. But why do we want to standardize? Simply because we expect to see and run larger operations. We all know that IT is still in its infancy, in the sense that not all other industries have yet fully exploited IT. This continuous growth in the need for IT is one of the most important driving forces to standardize, in order to be able to control and operate with sanity and at a feasible cost.

Grid computing is no longer an HPC-world architecture. Every day, more and more concepts from the HPC world are blooming under the "cloud" with scale-out solutions. The age of having to scale up or forklift-upgrade an environment is coming to an end. The concept of scale-out is simple: get more with less. Have an endless amount of a given resource without having to make dramatic investments and changes in the architecture and infrastructure. Scale-out as a feature is available in each of the primary categories: file, object and block. For example, GlusterFS scales nicely with files; it still needs features but is simpler than Lustre to operate. A commercial competitor, and my personal selection, is Isilon. It has lots of features that neither of the other two have and really serves a requirement. With block storage you can use ScaleIO, or VSAN with VMware; you have options. You do not need a "heavy duty" storage backend. Hyperconverged products such as Nutanix and SimpliVity, or then again Vblock or the upcoming EVO:RAIL, are grabbing a slice of the market. Just being scale-out doesn't speak for itself: each of these has its architectural limitations, so do not blindly trust what the whitepapers claim. They do offer a solution. Hyperconverged and scale-out are not killer products. They are not one-solution-fits-all-requirements products. They are products that enable you to construct infrastructures for given requirements.

Rahmu mentioned containers and presented a good comparative analogy: think of chroot on steroids. That is about what it is. It grants the possibility to run more processes over a common kernel, thus giving you a better ROI. Among the key characteristics of virtualization were standardizing configuration and hardware compatibility, hence less admin work, and being able to further leverage your current hardware. If in the past you had one app running on top of one kernel running on top of one physical CPU operating at an average utilization of 5%, that was a waste of 95% of your investment. With virtualization, you were able to situate about 30 VMs, hence 30 apps on their own kernels, on a single piece of hardware, multiplying your utilization and increasing your ROI. With containers, you can host even more with a smaller number of kernels, making it possible to leverage even more from your hardware. But one of the actual reasons for and benefits of containers is being able to scale deployments with less administrative work. Why have an uber guru administer a single Apache instance on a single virtual machine when you can have the same admin guru administering 200 Apache instances on the same virtual machine?

So as you see, all the variants of virtualization and their morphing into a "cloud" are mostly about standardizing and optimizing. But for what? Again, the same old question. Well, IT is not a leading force by itself; it is an enabler. It provides technologies that other industries demand, and it is that demand that creates the need for solutions, IT based solutions. Industries are ever more reinventing themselves with new ways of working. This alone has led numerous studies to deduce that, on average, 50% of common jobs today will be gone within 20 years. The postman won't be delivering your mail; it could be a drone. Your farmer could be replaced with a robot machine that multitasks. No more cashiers in the shops: one person would monitor about 8 self-service checkouts, cutting staff by 7/8. Yes, the world is changing, and it is this change that is the driving force to standardize the technology and the way of working with it. That is a lot of just about what the cloud is.

Think of what Rockefeller did about a hundred years ago. Think of the same phenomenon, but instead of a crude oil bonanza, an IT bonanza. Even though the cloud will cause change in employment roles, at the same time it is enabling new roles. With every change there are casualties, but over time it is worth going through, as in the long run it enables many new things that are currently impossible to achieve without extremely huge operations costs.

One of the key descriptions of the fundamental philosophy of the cloud is: to abstract, to pool and to automate. By now everyone should have started to wonder why go through the hassle of learning to do things in a different way when I can just keep doing them the same as right now. Well, the fact of the matter is that there is going to be a division: one part is going to fast-forward with the cloud, and the other part will remain with an archaic, bazaar-like way of working. To put that into an example, western nations will be going into the cloud, while other nations with poor infrastructure, inconsistent levels of education and unstable political policies will retain the old way of working.

It is always going to be a bipolar dilemma: those who want full control, and those who want to concentrate on core operations, hence consolidate and outsource. I believe the latter is what will dominate. Why go through the hassle of maintaining contracts, network, protocols, hardware, OS, frameworks and applications to run your application on top of all that, when you can cross out a few and end up with just a contract and an application? That is a lot of saving!

Just as with what Linux did to the IT industry so far, OpenStack will be doing just about the same. In other words, OpenStack, for good and bad, is the next Linux phenomenon. There already are dilemmas in regards to positioning and using it, and just as with Linux there will always be a need for it and for maintaining it. Just as Linux has closed and open developers maintaining it, so does OpenStack. But OpenStack alone will not suffice, just as Linux has been only an underlying OS.

OpenStack is a tool that underlies something else. Time will tell what things we are going to see.

More later; got to go to sleep now.

BL
Most of the products and tools referred to were open source related. In the cloud there is room for commercial products and operations. Open source will not kill the commercial industry; the commercial industry is recreating itself through open source.
I'll admit I'm mainly (only?) familiar with open source products. My employer has a strict open source-only policy so I never got to play with proprietary products.
The pros and cons are not weighed by the product itself alone, but also by how well it integrates with other tools through which you can orchestrate and automate.
This is where proprietary solutions really shine. EMC, Microsoft and VMware have a much more cohesive "ecosystem". It helps when you have a single decision maker.

The open source solution (well, I can only speak about OpenStack) has a lot of good things, but a cohesive ecosystem is not one of them. Instead, it's a patchwork of several efforts that made sense at one point in time for certain people, that have now crawled into some people's production, and that we have to maintain until the end of time.

One day, I'll write a major rant about the defects of "design by democracy"...
Another aspect is legal policy limitations, or staff knowledge and experience.
Here's my most important question of all: how does one start learning to use proprietary suites? You mention Isilon below; it sounds cool. I tried looking it up. It would cost me $5000 to get EMC to teach me how to use it. And they expect prior familiarity with their whole line of products. That cannot be the only way, can it?

In that aspect, Open Source products are more easily accessible in my opinion.
Rahmu mentioned containers and presented a good comparative analogy: think of chroot on steroids.
Couple of things:
  • Containers won't replace traditional hypervisors. Our network engineers would have no use for containers. Both solutions are meant to coexist.
  • What are the proprietary equivalents of Docker and LXC? Do EMC/VMware/Microsoft have a product for that?
Just as with what Linux did to the IT industry so far, OpenStack will be doing just about the same. In other words, OpenStack, for good and bad, is the next Linux phenomenon.
You're not the first person to draw the analogy between OpenStack and Linux. I think there is one very big difference between the two: Linux started with one talented, determined student with time on his hands. OpenStack started with the biggest names in the industry getting together and deciding to work together.
There are other changes in the scene as well that are worth considering:

Virtual Reality is going big. Facebook bought Oculus (makers of the Rift).
Peer-to-peer communication: FireChat, for example
Bitcoin: http://coinmarketcap.com/
Payments: Apple Pay may really change things. This may be Apple's largest contribution after the iPhone if not bigger.
Robotics/Biotech: Tech companies and venture capitalists are buying outside the software realm again; Tesla has gone big
Space Privatisation: SpaceX, Armadillo...
Some thoughts on Decentralisation: http://blogs.wsj.com/accelerators/2014/10/10/weekend-read-the-imminent-decentralized-computing-revolution/

On the bad side: Where are our 20GHz i7s? Where's my free electricity? Net Neutrality (US corps want to charge for using their internet pipes based on what you "watch").
@Arithma,

What you have listed are new products and services that enable businesses and end users to accomplish new things. Those are the types of things created to satisfy a demand, a business demand. These new services and products enable new businesses and ventures. These operations are in turn dependent on an infrastructure, and the cloud serves as the infrastructure for these services and operations. Old businesses and new businesses are both dependent on IT infrastructure and, in the future, the cloud. The cloud is the underlying layer that will host and cause a change in the industry from which there is no turning back.

Do we really need a 20GHz CPU when we can have the equivalent amount of computing power as a scaled solution without additional cost? The industry can put billions into developing such a CPU, or then again invest less by exploiting a grid and scale approach.

We are currently living at the beginning of the end of conventional resources. This alone is also a driving force to optimize things. We are currently consuming almost twice the amount of resources compared to what our planet is able to yield. The irony.
That is the challenge with OpenStack: it has just one too many moving parts, without any assurance that individual features will be developed and maintained throughout OpenStack's lifetime.

One of the big things of this century is the philosophy of an ecosystem. I have always looked down on numerous commercial solutions because of their closed and limited features and their lack of ability to integrate or be used further. It is such things that over-employed people in the IT industry, people who are currently becoming obsolete given what ecosystems facilitate and enable.

Big companies are betting on ecosystems: control of the whole stack. That is where we are going to see mergers between big companies and dog-eat-dog competition between ecosystems, and only time will tell who will survive and coexist down the road.

In regards to learning, I see that as a financing issue for your employer, not yourself. There are numerous official training courses that exist to enable any person to become familiar and competent with proprietary products. All the major vendors create curricula to guarantee a clear comprehension of the products and solutions, which ensures a level of mastery of the product. This is not a bad thing but a good thing. I've seen guys who have a Red Hat certification and yet do not fully grasp what LVM really is and why it exists; that is about the simplest example I can give at this point. Today the internet is infested with pirated materials. Don't get me wrong, I am not encouraging that, just stating it. But reading such material alone will not suffice to enlighten you on why certain solutions are designed the way they are. I have been to numerous costly trainings and always found value in them, because the instructors have real knowledge and references. It becomes more of a training and workshop experience, something you cannot compensate for by just reading documents.

But yes, there is truth to the statement that with open source there is a saturated amount of documentation and how-to websites through which you could learn. Well, for Isilon there are also open communities on the EMC website and Google Groups where you can discuss and enquire; some of those threads are even addressed by the developers themselves. I've also attended training courses concerning open source, and those were just as expensive, delivered e.g. by Red Hat, HP, etc. So when it comes to training for a solution and certification, there is no workaround; there is always a cost if you want to get certified. Look at VMware: you can't take the exam if you haven't participated in their course. I've been thinking of doing the VCAP since I already have my VCP, but the amount of training I would have to do is ridiculous. The bigger joke here is to self-finance such training to obtain a certification and still lack the experience that would help to get a job position, voiding the investment unless you have an actual need for it.

One of the things that gets on my nerves is people who brag about having this or that certification, or some number of certifications, yet are totally incompetent. As an example, I once had an instructor who bragged about having over 30 certifications (even listed on LinkedIn :P ), all in various fields, many within Microsoft technologies, yet the person did not know what diskpart was or how to use it. He was recommending third-party tools to do what diskpart does in Windows. Grr... I could go on ranting about that, but I think I will stop here. In short, for a company that plans to use a technology, regardless of whether it is commercial or open source based, the company always reserves a budget for proper staff training. One of the reasons for that is that insurance companies refuse to compensate if something goes wrong due to a lack of official training, which is considered part of a preventative, good way of working and of readiness.

In regards to containers, no, they will not replace hypervisors; they will replace certain implementations. A well scaled and orchestrated environment is where containers and hypervisors complement each other. I recall there were container products on the market about 15 years ago. At least one got bought out by VMware; I think its derivative is ThinApp. Not sure, got to verify that ;)

OpenStack has many opportunities. It is a phenomenon just like Linux, even though it has a different driving force coming from commercial companies. It will be interesting to see how far it will get. There is a lot of commitment to it. One of the reasons for that is that no single vendor has been able to provide standardization based on collaboration with other vendors. The OpenStack story differs here, in that all interested commercial companies have an equal opportunity to a standardized solution stack from which everyone gains.

More later, hopefully in reply to the xAAS question that Samer enquired about ;)
Training
About training, it seems like a chicken-and-egg problem. An employer won't hire you unless you know the tech, and you can only learn the tech if your employer pays for your training... Open Source techs are more accessible, not only because the documentation is accessible, but also the product itself, its bug tracker, its mailing list, ... Yes, Red Hat offers expensive certifications. But unless you want to take the really advanced ones, they're not very interesting. You're far better off downloading CentOS and googling a bit.

Standardization
About standardization, I don't really know what's going to happen. I think the idea is that OpenStack will federate the industry behind it simply by sheer power of numbers: it's going to be deployed everywhere, so it will impose compatibility on all app developers. Maybe that's the way it's going to be?

OpenStack, the reference
I am a big fan of OpenStack. Seriously, it's great to be part of this community today. But I can level a lot of criticism at it. A founding principle of OpenStack is having the industry giants team up to create a product capable of competing with AWS. And here comes this tiny Ruby shop called DigitalOcean that is capable of rolling out its own cloud, with a tiny team, a tiny budget and in a couple of years. We're talking about a full-fledged cloud with SSD provisioning, different storage solutions, a full-fledged API, decent IPv6 support and probably other features I cannot think of right now.

Seriously, do we really need to make a big fuss, spend millions, run huge marketing campaigns and generally make a lot of noise to do something that a small team of dedicated engineers pulled off on their own?

The Cloud is not a technical problem
We haven't talked much about this so far, but the challenge of the Cloud is not technical. We have built several clouds, using very different technologies, and they all work. The challenge is human. Once people, and decision makers in enterprises, understand the things we (cloud engineers) see, the real revolution will start. It will be soon, that's for sure...

Private clouds
@BashLogic: I have a question for you. Do you believe that private clouds are a solution in the long term? In the short term? Do you think a hybrid cloud (a private cloud that bursts into a public one) is a viable intermediate?

A private cloud is a cloud on premise, behind a firewall, within the full control of the company. The opposite is a public cloud like AWS or Cloudwatt.

I tend to believe that public clouds are all you need and that private clouds are just a way to give old school IT people the illusion of "control". But I'm not 100% convinced. I'd like your opinion.
@Bashlogic: 20GHz CPUs will absolutely transform the industry (exponential growth in computing power will anyway.)

The CPU bottleneck affects everything. It is such a prevalent constraint that most people don't even see outside of it.
There are, of course, a few examples that are not constrained by it, like most website hosting.

This is an interesting perspective to keep one grounded:
http://muratbuffalo.blogspot.com/2014/07/distributed-is-not-necessarily-more.html

We're going distributed because we don't have a free lunch anymore. This has enabled us to take up the Cloud thing better, since we're doing parallel anyway.
@Arithma,

20GHz? I don't mind, as long as power consumption doesn't grow in multiples. CPU speed has grown for decades and speed has always had an impact. Computing power nowadays is not based on single-threaded operations but on multi-threaded operations. If you use legacy 8- or 16-bit single-threaded code, yeah, a 20GHz CPU can help you up to a point; that is what I would call a scale-up patch-up. If you are multithreading, you do not need such high CPU speed and you can distribute the work across many CPUs. The bottleneck of CPUs nowadays has more to do with incompatible kernel semaphore values or using the wrong schedulers. Each application environment can be optimized, and optimization would include tuning kernel semaphores and schedulers.
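As a toy illustration of the "distribute across many CPUs instead of waiting for a faster one" point, here is a minimal sketch that spreads a CPU-bound task over all available cores; the prime-counting workload is invented purely for demonstration.

```python
from multiprocessing import Pool

def count_primes(limit):
    """Toy CPU-bound task: count primes below `limit` by trial division."""
    count = 0
    for n in range(2, limit):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

if __name__ == "__main__":
    # Eight independent chunks of work, spread across however many cores exist,
    # instead of relying on one very fast core to chew through them serially.
    with Pool() as pool:
        print(pool.map(count_primes, [50_000] * 8))
```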

The constraints of scale-out have less to do with the scale-out concept itself than with the attempt to use technologies at their limits. There are numerous technology components that we depend on today whose fundamental architecture was never designed with scale-out in mind; they have been patched and extended. And when you have a stack of protocols, processes, etc., you will be facing limitations at some point. Those limitations are not specific to the scale-out concept. A performance bottleneck in a single device does not mean that a whole distributed scale-out solution suffers the same. This reminds me of a military idiom: a chain is only as strong as its weakest link.
Well, scale-out distributed solutions are designed to be resilient to single and multiple points of failure (SPOF/MPOF). You can experience performance degradation during a SPOF or MPOF, but you will not experience a service outage. Resiliency is the new attribute when making use of erasure code based technologies.
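To give a flavor of that last sentence, here is a toy sketch of parity-based redundancy, the simplest relative of erasure coding: lose any single chunk and it can be rebuilt from the survivors. Real erasure codes (Reed-Solomon and friends) generalize this to tolerate multiple lost chunks; the chunk contents are made up.

```python
from functools import reduce

def xor_bytes(a, b):
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

# Data striped across three nodes, plus one XOR parity chunk on a fourth node.
chunks = [b"AAAA", b"BBBB", b"CCCC"]
parity = reduce(xor_bytes, chunks)

# Simulate losing chunk 1: rebuild it from the surviving chunks and the parity.
survivors = [chunks[0], chunks[2], parity]
rebuilt = reduce(xor_bytes, survivors)
assert rebuilt == chunks[1]
print("lost chunk rebuilt:", rebuilt)
```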
@Rahmu

The issue of certifications really ticks me off, as it really doesn't prove much; a person's portfolio proves a lot more. It is true that this is a chicken-and-egg dilemma. This is why I like the idea of educational institutions facilitating the obtaining of such certifications. But having a certification without actual experience is worth as much as toilet paper. During the last 15 years I have seen numerous Indians who have, and are proud of, their certifications; only a tenth really lived up to them. The remainder made me question their competence even more, especially when they openly brag about how they pay a friend around £100 to do the certification exam on their behalf. It's this kind of great achievement that has caused lots of excess, unnecessary headaches in the industry. A colleague of mine has lately been traveling to various destinations across the world working in customer environments. He was personally picked to do this because he, as one person, was able to accomplish alone what 14 Indians with equivalent "training and certification" were not able to do. He delivered quality, and on time.

The challenge with self-learning is the pace of the learning curve: for some it can be short, for others very long. There is no consistency in that. Through professional training and certification you can expect a level of mastery, because you are being mentored and absorbing at a higher, more intensive rate. IT has various fields, and open source is present in most of them, but open source does not have ideally composed training material for each field or across several fields. There is always room for improvement whichever way you look at things; a chicken-and-egg dilemma. I wish that commercial products were as open as what you referred to with Bugzilla and the works, but believe me when I say that, from what I have seen, not even big companies are able to oblige developers to document 100% of what they code. For example, with a product X, they might release an upgrade patch and document about 20 things fixed in it, while in truth 20 is what they bothered to publicly report, and under the hood they have done so much more. This has really pissed me off. It is this kind of human behavior that even tools such as Bugzilla cannot defeat. My way of overcoming it is self-confidence in my troubleshooting and a network of people with and through whom I can push things forward. With one company XYZ, I identified a bug, the process took over six months and nothing happened. Then I remembered that I had a friend who works in the same department; I contacted that person, and in two weeks I had the patch that I needed. The surprise was that even though the company has an "internal open KB", they had a closed Bugzilla accessible only by a few, and another unofficial KB/Bugzilla that the developers used among themselves, of which no one knew but the coders... hip hip hurray... &%#€#%! I got a bit off topic here, but I think you got the idea ;)

In regards to OpenStack, regardless of its challenges and flaws, I still have high expectations for it. I just hope that it won't be the next SNMP. SNMP was supposed to be a common thing among many vendors; the bickering led to nothing other than the misuse of the protocol, killing its actual potential. Yes, it is in use, and yes, lots has been done with it, but it had the potential to do more.

It's usually the small teams that can do more, because they have a clear objective and dedication, get version 1.0 completed, and take it from there. In an environment where you have more players and agendas, you are lucky to get version 1.0 released with even half of what was initially planned! Such small units/companies are the ones who get bought out because they yield results. They cash in, then they quit their jobs and everything stays to gather dust. This has happened over and over with open source and commercial solutions. The big fuss in regards to OpenStack, or the cloud in general, has to do with more than technology alone. So far the dialog in this thread has been very technology centric.

With the cloud you have several areas that are impacted: the software, the infrastructure and the IT operations. Solutions such as OpenStack are infrastructure related, a third of the whole, which alone does not tell the whole story. The big picture includes those three main areas, and OpenStack covers only one of them. Yes, it has features and promises to do more, but as we know by now, not all features or promises are kept.

The biggest challenge, of course, is the human factor. It is not easy to accept a change, embrace a change and live with the change. Here, there is going to be a change. Why so? Simply because current contemporary solutions are, in the long run, already classifiable as legacy solutions to maintain. This kind of reminds me of the talks that went on about why one should port software from 16-bit to 32-bit and even to 64-bit. Wasn't that also a big change? In the more positive direction, I believe...

As I mentioned earlier, there are three distinct areas in the cloud and its transformation:
- the applications
- the infrastructure
- the IT operations

In regards to the applications, to cut the story short, it's about replatforming and porting the solution and, as may be required, maybe changing some of its architecture in order to comply or fit better with the new underlying platforms. Think of Java: what was Java like when it came around? A huge thing, code once, use on many platforms (almost). Well, with this transformation, take that a notch up: code once, port to many cloud platforms. Look at Pivotal, or then again at Hazelcast; these kind of give you the idea.

In regards to infrastructure, yeah, there is a lot here, maybe a bit saturated as well, but still with lots ahead. The key play here is to abstract the resources, put the resources into pools, and automate and orchestrate. This is where you will see lots of DevOps activity. Whether it is OpenStack, Ceph or ViPR, the concept remains the same.

The deja vu that I get sometimes gives me headaches. I remember 15 years ago tweaking my own automation and orchestration to achieve what is a trend now. I obliged developers to use virtual machines for their Java development. I reset them every night, overwriting everything from a gold copy. They hated me for that, but look at things today and what is happening: you no longer have the fixed dev box, you deploy one when you need it and let it expire after that. I made use of DFS and terminal servers, through which I obliged people to use corporate productivity tools. I had control over the data and how it was accessed. What is the first generation of VDI doing today? Just about the same thing. The second generation of VDI today sources applications from across platforms.

A big headache with the infrastructure has often been compliance. There are over 4000 security compliance standards and requirements in Europe alone. I do not know of any single product solution that can qualify for all of them. This alone is causing a headache for what the cloud could be, where the data resides and how it is accessed. Or then again, what about backup and restore solutions: do you want to only back up, or do you want to restore, or both? And where do you want to back up from, and where do you want to restore to? Things can get complicated. Because of such things, things in the cloud infrastructure will be bundled. For example, you order a silver-level virtual machine. That would include average storage, sufficient computing, basic backup and restoration, and minimum auditing and monitoring; you do not have to select these separately. They are pre-bundled, end of story. Or then again, say you have a security access control application that monitors where people walk around in a bank. You need fault tolerance, high availability, security compliance, zero unplanned downtime, backup, high-security auditing, etc. In the old days that would cost you lots of man hours to design, implement and maintain. Now you just select a platinum- or gold-level virtual machine with a preinstalled "apache", for example. It has all the features required at the tick of a box.

In regards to IT operations, that is where the big change is occurring as well. How do you tackle the weakest link in a chain? The human factor is what I am referring to. Well: bundle stuff, lay out simplified and standardized processes and workflows, etc. I remember having worked for a customer who used lots of man hours to emulate what today can be done through defined processes and automation. Ten years ago, when I dared to suggest a simplification, I was decapitated simply because the ideas were threatening to job positions. As an animal survival instinct, people always perceive the "danger" first and fend it off without giving things more thought. In this case people wanted to secure their jobs and way of life; they were not ready to embrace a change. Today, if you are not ready to embrace a change, regardless of your competence or character, you will lose your job. There is a big misunderstanding here: the point of simplifying and standardizing workflows and processes is not to cut down on head count, but to enable the staff to do more, cover more and yield more. I don't know if you have ever read the comic book "Asterix in Rome", but there is one scene I never forget, when Asterix goes to get a document and faces a tremendous amount of bureaucracy. This is what the current IT world is like. There is no ill intention, but the various branches of the bureaucracy are not collaborating or supporting each other, hence making IT work slow and hard in large organizations. A little like when I was once working for a customer: the customer requested access to a NAS share. The ticket was dispatched, job done, closed and signed off. It didn't work. So they reopened another ticket for the same thing. This time I was able to map the share, but I wasn't able to browse it. Then another ticket was created; I was able to browse but not able to write. Then another ticket was opened because I could delete files. Makes sense, eh? Who is to blame for that... reminds me of the YouTube satire where Akhmed the xxx xxxx says "i-kill-you". Now that I once again went off track, back to topic.

The cloud, in regards to IT operations, is about standardizing. In my previous example, a "grant user access" request would include the other attributes, so you don't have to create a dozen separate tickets just to access a NAS share. Certain requirements will be bundled into ready processes and packages to cut down on these. "I need a mailing distribution list": instead of being your own architect, project manager, delivery manager, implementor, etc., a predefined tick-in-the-box workflow would exist as a template, eliminating tedious labour and wasted man hours.

Let's be realistic here: private clouds will exist, but not in the same abundance as the on-premises IT hardware that exists today. There are various reasons for this, of which the most important are:
- compliance
- legacy apps
- impact on business
- network infrastructure reliability
- political impact
- natural disaster impact
- etc

Legacy apps: it could be that a legacy app is cheaper to run in your office than in the cloud, hence a mini cloud in your office.

In some cases, data "cannot leave the building", hence you will need a private infrastructure, hence a private cloud, something that can be running, for example, on converged solutions such as Nutanix, SimpliVity or EVO:RAIL. They are small enough to maintain local operations and flexible enough to use locally and connect to a public or remote private cloud.

In some cases (which reminds me of a project 20 years ago where they wanted to remotely monitor and control beverage bottling in Africa from Europe), disregarding network limitations, it's one thing to monitor and control remotely, but what if something goes wrong? The production line stalls and you lose dollars by the hour. In one such case that I know of, the cost of such a stall was $30M/hour. So no, you will not be handling such operations remotely for the next 10-15 years!

If you want to go all public cloud, which would work for most, you are fully dependent on 99.999% uptime from your infrastructure. Can you guarantee that from your local telecom? I cannot guarantee that anywhere in Europe! Just last summer I laughed at the irony of a local company selling home security solutions based on mobile tech; it would send you alerts via SMS, MMS, etc. Then we had a long electrical outage, over 30 minutes, and even my work cell phone was down! Nothing was working! This might be OK for some businesses, but not for many, so it would oblige them to have a small private cloud to maintain operations during such outages.

Politics is a sensitive topic, but a world we have to live with. Look at Russia and Ukraine: imagine that you had a DC, or used public services from a company, residing in Crimea. What kind of impact would that have? A nasty one, I guess.

What if you have a disaster in the area? Believe it or not, even big hosting companies are failing on this topic. Just this week I heard of an incident where there was a water leak in a building that a hosting company had leased facilities in. The building maintainer decided to shut down all electricity in the building, not knowing what the ISP does there. There went the primary electricity source and, shortly after, the backup power. But the backup power wasn't of much benefit, because the building's cooling system was shut down from the start, which caused an overheat that killed the datacenter long before the secondary power ran out. The irony... all because of a pipe leak and the lack of organization and communication between the landlord and the tenant.

Look at solutions such as what Riverbed provides; this is part of what the future could be like. In short, whether it is something like that or a small "cloud" in your office is a case-specific question. A huge percentage will be consolidated into the public cloud, where you require only internet access. But then you would plan your business redundancy accordingly as well; if not, the insurance company won't compensate for loss of revenue or business if you do not do your homework yourself. Always be up to standards and compliant.

One thing to remember about the cloud is that one of its key characteristics is elasticity. You could have a single vSphere or Xen instance in your office, yet every once in a while you might need to host more VMs than fit into it. It might not be feasible to install another due to cost or other reasons, so what do you do? Well, connect it to the cloud. You could elastically push non-critical VMs from your box to another hosting company's cloud, host the additional VMs that you require, then retrieve them when done and pay as you go. Practical, eh?


I once again ran out of time to rant about stuff and neglected to comment on Samer's enquiry regarding xAAS. Sorry dude, maybe tomorrow ;)

Now to sleep, and to dream of the beer that I will consume this weekend... maybe...
wow BL, that's pretty interesting.

Do we need faster CPUs?
First, a small aside about 20GHz CPUs. If I were to summarize (very) briefly what you're saying, it's that we don't need them because all apps are multithreaded or distributed. @arithma is saying that apps are distributed because CPU performance has been capped for a long time.

It'd be interesting to list potential use cases that would benefit from faster CPUs. For the sake of the exercise, let's imagine that we invented an infinitely powerful CPU. It can instantly compute everything in its scheduling queue. What are the programs that would benefit from that?

Obviously, that would exclude any kind of networked program, like Cloud infrastructure software or website applications. These will always be IO-bound and we can be sure they won't benefit from it. But what about computation-heavy processes like:
  • Scientific calculations like generating digits of π or modeling molecules.
  • Data mining
  • Natural language processing
  • Video/game rendering
One can argue that these fields are now being run by distributed programs because of the limitations of the CPU. Remember that distributing computation introduces a heavy (and costly) overhead that you'd often be delighted to remove. Ever tried to run Hadoop on Big Data that turned out not to be that Big after all?

Can you think of other fields?

Cloud stuff
@BashLogic: You summarized my ideas pretty well and gave insight into interesting products and trends. Thank you for sharing the anecdotes; they mirror my experience, and that of my colleagues and people in the industry, very well.

There are 2 products (open source, of course) I wish to mention that I think might be of interest here.
I remember 15 years ago tweaking my own automation and orchestration to achieve what is a trend now. I obliged developers to use virtual machines for their Java development. I reset them every night, overwriting everything from a gold copy.
Check out Vagrant. Think of it as a nice CLI wrapper on top of your hypervisor, but it's even more than that. It was developed by devops for devops, and we use it exactly as you describe it in this quote. I like it because of its integration with Puppet (it supports all sorts of provisioning, like Chef, Ansible or even good ol' shell script). Trust me, if your devs had this, they wouldn't have hated you back then.
In regards to IT operations, that is where the big change is occurring as well. How do you tackle the weakest link in a chain?
I'm currently playing with OpenShift. It is marketed by Red Hat as a PaaS, but from what I've seen so far it does even more than that. It's an app delivery system that behaves more or less like your description of operations in the Cloud. I know people working on integrating OpenShift and OpenStack. It's going to be big :o)
I wonder if anyone is reading or thinking about what we have been talking about so far. So far it has been cloud related; maybe this could be retitled as cloud foresight. It would be ridiculous if Rahmu and I were about the only ones pitching in. Arithma has been brave and his pitch has been welcome; if anything, his pitches have indirectly emphasized some of the things that are being stated.

Is there a possibility to add polls to this thread or something?
I took the liberty of retitling this thread as "cloud foresight", since most of this thread, if not all, has been cloud centric.
BashLogic wrote: I took the liberty of retitling this thread as "cloud foresight", since most of this thread, if not all, has been cloud centric.
You can't expect a discussion to get more participants by narrowing it down :)
Perhaps you need to pose more questions as well...

Let me pose some:
- Adobe has been partnering with giants (Microsoft, Google) and driving to provide their suite over video. This both opens new businesses and endangers software 'ownership.'
- Have software engineers become the bottleneck in the production pipeline for large entities?
- With the advent of this new 'cloud,' what have the main benefits to the users been? A curse of choice?
- Lower barrier to entry: The noise has increased. Is it more difficult to be a software engineer now or easier?