Smart Cities in the age of “the cloud”


In recent months, large companies such as Google, Amazon and Microsoft have significantly shifted their businesses towards the cloud. In doing so, they have either confirmed an existing strategy or defined entirely new company policies.


Information technologies are a fundamental pillar of smart cities. If it were not for communication networks and the data processing capacity offered by data centers, we would still be "chained" to desktop computers, and mobile networks and the Internet would not even exist. Recent as these technological developments are (the origins of the Internet date back to the 1960s), we have become quite dependent on them, partly because we have made technology a fundamental pillar of our society. Healthcare, education, science, economics and even social relationships rely heavily on technology. Even people who are not keen on technology find themselves depending on services and infrastructure that would not be possible without the development of ICT.

 

Internet users per 100 inhabitants (source: ITU)

 

 

Technology 3.0: from processors to the cloud, by way of the Internet

Whether in a planned way or swept along by the succession of technological developments driven by Moore's Law and the increasing refinement of electronic processes, the last decade has been outstanding in terms of the speed and significance of progress in several technologies: microprocessors, storage systems, communication networks and software development. Cell phones today have more processing capacity than a desktop PC had just a decade ago; laptops are more powerful than the supercomputers of a few decades ago; and a photo camera has more storage capacity than a mainframe. No matter how you look at it, the statistics are overwhelming.

The innovation process in technology has been neither completely straightforward nor entirely planned. To take a case in point, we are in the midst of what we could call a trip back to the origins: at first, mainframes installed in the computing centers of companies and universities were the standard reference for information processing. First came punch cards, then data entry systems and storage on low-capacity devices backed up by tape systems. Huge, dedicated systems were used as working tools. Then came a drift towards PCs and laptops, and centralized processing systems were set aside. After all, with the introduction of the x86 processor architecture and the formulation of Moore's Law (which states that the number of transistors in an integrated circuit doubles approximately every two years), desktop equipment seemed capable of handling all kinds of calculations and tasks in office IT or computer-assisted design.

In parallel, computers were connected for the first time through the ARPANET network. A textual interface was used at first, followed by a graphical one when the WWW came into being around 1991. This coexisted for some time with the development of 2G mobile communications, the first digital generation.

Messaging, data transmission and e-mail were the initial applications of connected computers, but it soon became clear that, wherever the web was available, it would be far more convenient to offer services hosted directly on servers rather than run them locally. E-mail was one of the first services in the cloud thanks to webmail, which enabled direct access to e-mail through a browser. In fact, Google and Microsoft, beyond their search services, built their positioning on Gmail and Outlook, respectively. From then on, other services were offered, such as document management and editing, along with online storage.

 

"The cloud is undergoing the same process as that of smartphones and tablets: there has been an increase in programmability and flexibility"

 

Every technological evolution brought improvements in several aspects: connection speed, the processing capacity of the computers that make up data centers, the speed of storage systems and the amount of RAM available. Early websites were static. Later, the introduction of server-executed code made it possible to add simple dynamic elements to websites, such as visit counters or interactive photo galleries. After that, databases were introduced, along with the possibility of interacting with them to perform searches, enter data and manage content in increasingly complex ways.
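To make that leap from static pages to server-side code more concrete, here is a minimal sketch in Python using Flask: a rudimentary visit counter of the kind those early dynamic sites offered. It is purely illustrative, not a description of how those sites were actually built; the counter lives in memory, whereas a real site would persist it in a database, as the text goes on to describe.

    # Minimal sketch of server-side dynamic content: a visit counter.
    # Illustrative only; a production site would store the counter in a database.
    from flask import Flask

    app = Flask(__name__)
    visits = 0  # in-memory counter, reset every time the server restarts

    @app.route("/")
    def index():
        global visits
        visits += 1
        return f"<h1>Welcome</h1><p>You are visitor number {visits}.</p>"

    if __name__ == "__main__":
        app.run(port=8080)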

Later on, additional services were gradually introduced. Nevertheless, they worked unilaterally: other companies had no way to build their own services on the data center servers. These services included image processing, translation into several languages, CRM applications, contact management and storage systems with advanced functions for sharing or even editing files.

A parallel may be drawn with the evolution of mobile devices. Early Internet-connected terminals offered little more than a browser to access websites. Any other function was pre-established in the device firmware (defined by its hardware), with no chance for users to modify it. Later on, as technology became more refined, optimized and flexible, installing applications on mobile devices became possible. Before Apple and Google, BlackBerry and Windows Mobile made room for that in a somewhat hand-crafted manner, using repositories that contained all the programs available at the time. Nevertheless, this was neither easy nor straightforward, and it was mostly considered an option for companies rather than for a broad audience.

Today, smartphones and tablets are a model of programmability and flexibility. Something similar is happening with the cloud: there is a shift from a rigid, inflexible use of the cloud (pre-installed services and applications tied to given hardware, with inflexible configurations) towards a situation in which clients can install their own systems and configurations in a flexible, dynamic manner. Virtualization has been one of the driving forces behind that change: a given data center may be used to run several Linux distributions as well as Windows Server, or even services and applications such as databases, none of which requires particular technical knowledge.
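To illustrate how low that barrier has become, the sketch below provisions a virtual machine on a public cloud with a few lines of Python using the AWS SDK (boto3). It assumes AWS credentials are already configured; the image id and instance type are placeholders for the example, not recommendations.

    # Illustrative sketch: launching a virtual machine in the cloud with boto3.
    # Assumes AWS credentials are configured; the AMI id below is a placeholder.
    import boto3

    ec2 = boto3.client("ec2", region_name="eu-west-1")

    response = ec2.run_instances(
        ImageId="ami-xxxxxxxx",   # placeholder image, e.g. a Linux distribution
        InstanceType="t2.micro",  # small general-purpose instance
        MinCount=1,
        MaxCount=1,
    )
    print(response["Instances"][0]["InstanceId"])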


At the Google Cloud Platform Live event held on November 4th last year, Greg DeMichillie, Director at Google Cloud Platform, underscored the value of the cloud's evolution by stating: "we generally use new technologies like those we already know before we take advantage of their full potential." In a similar fashion, he added that "a datacenter is not a set of computers, it is one computer only."

 

Cloud is the new normal

Andy Jassy, Vice President of AWS (Amazon Web Services), stated during his keynote at the Amazon AWS re:Invent event (https://reinvent.awsevents.com/) held on November 14th, 2014: "Cloud has become the new normal." The trend has grown stronger in recent years, pushing large technology companies to reinvent themselves in order to adapt to the new paradigm, or at least to stay in the lead in the cloud segment. In a recent conversation with Carlos Conde, Evangelist at Amazon Web Services, we learned first-hand about the AWS philosophy behind a trend that companies such as AWS themselves helped shape. Extremely popular services such as Dropbox or Netflix rely on the infrastructure offered by providers such as Amazon itself. Other companies offer dedicated services, such as Salesforce for CRM; others provide web hosting or content platforms for companies such as LG (for their smart TV devices). Snapchat runs on Google Cloud technology, and even governments are starting to use cloud technologies to offer their services or deploy their own infrastructure.

Companies face the question of how far to take the cloud: private, public or both (a hybrid model). Nevertheless, thanks to progress in areas such as security, software-defined networks, virtualization and application packaging, the boundaries between public and private clouds are becoming fuzzier, and it is now possible to use hybrid structures as well as public-only or private-only structures, depending on needs.

 

"Large technological companies had to reinvent themselves to adapt to the new paradigm: the cloud has become the new normal"

 

Differences between service models are also becoming increasingly blurred. Three categories were traditionally established: SaaS (software as a service), IaaS (infrastructure as a service) and PaaS (platform as a service). SaaS is aimed at end users or specific uses. This is the model used by Gmail, for instance, where access is limited to a service clearly defined by the company offering it (Google in this case), and users cannot add functions beyond those offered by the access interface.

PaaS and IaaS, in contrast, have traditionally been differentiated. PaaS abstracts the underlying hardware layers in order to offer fast development environments, mostly web services linked to databases and business intelligence systems, or even other applications and services, while avoiding the complexity of low-level APIs and debugging processes. IaaS provides access to the hardware: network communications and virtualization. Even so, technologies such as application containers are finding their way in: they bring functionality that until now was available only at the infrastructure level up to the platform level, from which it becomes possible to interact with the infrastructure beyond the hardware abstractions set by cloud service providers.

 

Trends in “cloud computing” 
The cloud has become a trend in itself. No one would now think of using a smartphone without an Internet connection, to take a case in point, because an overwhelming share of smartphone usage depends on online services and web applications, and those services and applications are hosted on cloud platforms and services. In the last few years, companies that were not yet fully involved in the cloud have become increasingly aware of this, as have companies that were already in the cloud arena but needed further work to offer cloud service modalities covering every possible category and area. As usual, expectation curves tend to be fulfilled, and Gartner's seems a good reference for gauging the relevance of current trends.

 

 

Cloud Services

 

Investment

According to reports by IHS Technology, investment in cloud infrastructure and services will reach 174.2 billion dollars in 2014 and is expected to reach 235.1 billion dollars by 2017. That represents a 20% increase in 2014 compared with 2013, and it also means that investment in 2017 will be three times that of 2011 (78.2 billion dollars). Other consulting firms, such as Gartner, agree with the forecast and confirm the interest in the cloud as a way of offering services and applications in both the public and private spheres.

 

Companies

Traditionally, the key companies in the cloud arena have remained a fairly stable group since the early development of cloud services, back in 2006, when Amazon joined in. Nevertheless, AWS is now watching other companies grow their figures as a result of their work in this area, such as Microsoft, led by its new CEO, Satya Nadella, or IBM, which continues its strategy as an integral solution provider. Companies such as Google do not need to start from scratch; rather, they are moving their cloud strategies in new directions, as are VMware, Cisco and Teradata. Others, such as Salesforce (www.salesforce.com/es/), are strengthening their offerings.

 

Parallel computing and virtualized graphics 

IaaS providers such as Amazon Web Services are starting to offer massive parallelism technologies as part of their services. For more than a year, AWS has been offering computation services based on NVIDIA's GRID technology, opening the door to HPC (High Performance Computing) and GaaS (Gaming as a Service), among others. To take a case in point, on November 13th last year NVIDIA announced the availability of GRID for gaming as a service in the USA (to be expanded later to Europe and Asia), installing systems in the cloud that feature its virtualized, low-latency GPU technology.

 

IaaS, PaaS and containers 

Convergence between IaaS and PaaS is a new trend made possible by the introduction of containers such as Docker (https://www.docker.com/). Virtualization made it possible to abstract the hardware, so that a company could use the same server to offer several instances of its services. For example, Microsoft Exchange requires a machine for every client; by making those machines virtual, hardware resources can be used far more efficiently. Google, on the other hand, uses containers, which provide abstraction at the operating system level. In practice, containers bring PaaS closer to IaaS, which is ideal for adapting to another current trend: using the cloud at the application level, not only for infrastructure.
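As a concrete illustration of the container idea, the sketch below starts an off-the-shelf web server in a container using the Docker SDK for Python. It assumes a local Docker daemon is running; the image and port mapping are arbitrary choices for the example.

    # Minimal sketch: launching an application container via the Docker SDK.
    # Assumes the Docker daemon is running and the image can be pulled.
    import docker

    client = docker.from_env()

    # Run a web server image in the background, mapping container port 80
    # to port 8080 on the host.
    container = client.containers.run(
        "nginx:latest",
        detach=True,
        ports={"80/tcp": 8080},
    )
    print(container.id)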

From the cloud for infrastructure purposes to the cloud "for applications"

The cloud has traditionally been employed as a way to reduce infrastructure costs or to offer software as a service, but companies are taking the “application philosophy” to the cloud. In order to do so, they need to face massive user demands, and containers have proved themselves to be much more efficient than virtual machines for that purpose.

 

SDx or Software Defined Everything

As a way to maximize the use of infrastructure, companies have coined the concept of "Software Defined". Network devices were the first: once hardware-defined, they could only perform certain communication management tasks, and any changes were determined and limited by firmware updates. Today, the functions of communication systems are becoming software-defined, which brings flexibility and elasticity. The concept is now extending to data centers, as shown by the term "Software Defined Data Center" (SDDC). Essentially, it involves extending virtualization to storage, network connectivity and security. Companies such as VMware are leading the way, as is Cisco, to mention just two widely known names.
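To make the "software defined" idea more tangible, the sketch below pushes a traffic policy to a network controller over a REST API instead of reconfiguring hardware by hand. The controller address, endpoint and payload are entirely hypothetical; real SDN controllers each expose their own APIs.

    # Hedged sketch of software-defined networking: a policy is pushed to a
    # controller via a REST call. Endpoint and payload are hypothetical.
    import requests

    policy = {
        "name": "sensor-traffic",
        "match": {"vlan": 120},
        "actions": {"priority": "high", "allow": True},  # QoS and access rules
    }

    resp = requests.post(
        "https://sdn-controller.example.org/api/policies",  # hypothetical controller
        json=policy,
        timeout=5,
    )
    resp.raise_for_status()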


Caption: software-defined networks rely on advanced processors such as Intel Xeon, capable of processing network traffic in software and adapting it to the security, access and quality-of-service requirements determined by cloud services and applications.

 

Big Data

Big Data is one of the key elements in the development of cloud computing, and vice versa: through cloud computing, Big Data can reach small and medium enterprises. Examples such as Google's BigQuery or IBM's Watson are becoming useful tools as other companies link their analytics or visualization tools to these computational resources. Even "traditional" companies focused on bare-metal solutions for enterprises are moving their data analytics offerings to the cloud. Teradata is one example: it is starting to build data centers to offer its services to companies that are not big enough to face such a large investment in infrastructure.
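As an illustration of how such analytics services are consumed, the sketch below runs an aggregation query with the BigQuery Python client. It assumes Google Cloud credentials are configured; the project, dataset and table names are placeholders invented for the example.

    # Illustrative sketch: querying a (placeholder) table of sensor readings
    # with Google BigQuery's Python client library.
    from google.cloud import bigquery

    client = bigquery.Client()

    query = """
        SELECT sensor_id, AVG(temperature) AS avg_temp
        FROM `my-project.city_data.readings`   -- placeholder table name
        GROUP BY sensor_id
        ORDER BY avg_temp DESC
        LIMIT 10
    """

    for row in client.query(query).result():
        print(row.sensor_id, row.avg_temp)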

 

Data as a service

 
This leads us to another trend: Data as a Service. Data are the new raw material of the digital economy, and companies such as Oracle are already offering this DaaS modality as a way of diversifying their cloud platform services. Fi-Ware (www.fi-ware.org/) is another example of an initiative in which data play a central role as a foundation for building services and applications in the smart city context. Fi-Ware is a European project led by Telefónica, a company that is also focusing on the cloud in areas such as IoT or even as a carrier (Tuenti).

 

 IoT and the cloud

The Internet of Things is another fundamental element of the cloud and, conversely, cloud computing makes it possible to take the IoT to a much wider scale than the traditional approach of ad hoc infrastructure allowed. Connecting sensors to a cloud platform makes data immediately available for processing. Libelium (www.libelium.com) is an example of this kind of connection between sensors and cloud solutions (Esri, Sentilo, Telefónica, MQTT, ThingWorx and Axeda); Telefónica, with Thinking Things (www.thinkingthings.telefonica.com), is another, to name a couple of examples close to home.
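As a sketch of what "connecting a sensor to a cloud platform" can look like in practice, the example below publishes a reading over MQTT (one of the protocols listed above) using the paho-mqtt Python client. The broker address and topic are placeholders; a real deployment would also use TLS and authentication.

    # Minimal sketch: a sensor pushing a reading to a cloud platform over MQTT.
    # Broker address and topic are placeholders for illustration only.
    import json
    import paho.mqtt.client as mqtt

    client = mqtt.Client()
    client.connect("mqtt.example-city-platform.org", 1883)

    reading = {"sensor_id": "air-quality-042", "no2_ugm3": 38.5}
    client.publish("city/sensors/air-quality", json.dumps(reading))
    client.disconnect()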

Apart from that, companies such as Amazon, Microsoft, IBM or Google are now including IoT-specific approaches in their cloud solutions. Google calls it the "physical web"; Amazon has a specific portal for solutions in the field (http://aws.amazon.com/es/iot/) where clients can connect devices to applications through data. At Microsoft's latest TechEd event, held in Barcelona, topics such as Windows 10 for IoT were discussed, along with new functionalities for its cloud platform, including Stream Analytics and Data Factory, which are specifically designed to process huge amounts of structured and unstructured data in real time.

 

Smart Cities

Within the context of all these trends, smart cities are becoming an ideal application area for cloud technologies, and the cloud is a fundamental driving force in their development.

 

Microsoft Accelerating Cloud

 

Smart cities in the age of the cloud 

The deployment of smart cities relies heavily on the advances achieved in cloud technologies. A smart city depends on open, scalable, cross-cutting systems that can offer real-time services and are capable of analyzing data on the Big Data scale.
The concept of "the city as a service" (CaaS) is entering the plans of those in charge of defining smart city models for the coming decades. Before that, however, standards and architectures must be defined so that services for citizens can be introduced in a fast, comprehensive manner. The fastest, cheapest way to introduce them is by means of cloud platforms.

Security is no longer such a concern when it comes to contracting services in public clouds. In fact, a few months ago AWS obtained certification from the United States Department of Defense to handle "sensitive" and classified data. Cities, for their part, are ideal clients for cloud platforms:

  • Economic resources are limited and investment should be cautious and understandable by citizens.
  • Scalable, dynamic systems are required, according to demand. In this case, demand usually fluctuates, and it is sometimes completely impossible to forecast.
  • Tools for data analysis in real time are required; they are expected to handle large amounts of structured and unstructured data gathered by sensors. Big Data processing capacity becomes easier to afford when cloud service providers are hired.
  • Cross-cutting, universal structures are required to connect services and applications in cities all over the world. The cloud is universal and cross-cutting by definition, with interconnected servers in geographic locations all over the world.
  • Sensor network interconnection may be immediate if performed through connectors in already-existing cloud platforms.
  • Introduction of applications and updates for cloud platforms may be organized in a hierarchical, modular manner if it is done using the cloud instead of proprietary platforms.
  • It is compatible with the deployment of private clouds, if required by law depending on the country or the geographical area.

 

"The fastest and cheapest way for smart cities to introduce services for their citizens in a universal manner is by using cloud platforms"

 

A cloud-based deployment is interoperable as long as interoperability is specified as a requirement, and that requirement is easier to define when working with cloud platforms.

Open Data is one of the key pillars of smart cities, and cloud platforms are essential for building solutions in this field. There is no need to look far: Barcelona already offers an open data platform for companies, citizens and organizations built around Microsoft cloud solutions such as Azure and, at a higher level, CityNext, Microsoft's specific solution for smart cities.



Even clouds may fall

One of the issues with cloud solutions is their resilience and robustness. This is more of a risk than a problem, and on many occasions it is only measured statistically, acknowledging that a computing system may suffer service failures. The probabilities derived from those statistics are on the order of 0.06 for Microsoft services, for example, and several orders of magnitude smaller for critical services. Nevertheless, they are not zero, which means that at some undetermined point in time, services will not be operational. Of all the problems the cloud has suffered, the outage that affected BlackBerry for three days in October 2011 was a turning point for the company, and it seriously challenged the viability of the cloud as a technological solution.

In June 2012, services hosted on Amazon Web Services (including Instagram, Netflix, Pinterest and Flipboard, among others) were affected by problems in the platform. In April 2011, Sony faced a security breach that forced it to shut down its PSN platform for 23 days. Over the years, all cloud service providers have suffered failures to some extent at one time or another.

In any case, this is a statistical matter, and the probabilities are gradually shrinking, to the point that they are no longer significant for the viability of the services offered. The redundancy inherent in the infrastructure of large cloud companies makes it possible to rebuild a service and its associated data in a matter of minutes or even seconds.
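As a back-of-the-envelope sketch of what those small probabilities mean in practice, the snippet below converts an availability figure into expected downtime per year. The 99.95% value is purely illustrative, not a quoted SLA from any provider mentioned in this article.

    # Illustrative arithmetic: availability expressed as expected downtime per year.
    HOURS_PER_YEAR = 365 * 24

    availability = 0.9995  # placeholder figure, not a real SLA
    expected_downtime_h = (1 - availability) * HOURS_PER_YEAR
    print(f"Expected downtime: {expected_downtime_h:.1f} hours per year")
    # -> roughly 4.4 hours per year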

 Availability zones

 


What is the cloud?

"The cloud" is such a widely used expression that it becomes all too easy to forget what we are really talking about: specific, tangible elements. We may casually throw around the expression "cloud computing", but in fact we are talking about large-scale data centers spread all over the world, interconnected by networks working at extremely high speeds, with a combined processing capacity so large that it would place Amazon among the first 64 positions of the top500.org supercomputer list.

The cloud is, in essence, the outsourcing of processing and data. Whenever the CPU, memory or storage involved in running a given application, process or service is physically located outside the device in our hands or in front of us, that is the cloud: used to store data and files or to run applications and services.

Data centers are server farms, and they may host tens, hundreds, thousands or even hundreds of thousands of them, depending on how they are designed and what uses they are to be devoted to.

Private clouds already exist: they are created for the use of a specific company or organization and configured accordingly. They are not publicly accessible and are designed for particular applications and services. Public clouds also exist, managed by companies that make a profit by renting out their infrastructure, platforms or services.

Hybrid clouds combine both models: some aspects of the cloud are managed by the client, while others are managed by the cloud provider.

 
