A Brief Introduction to Edge Computing and Deep Learning

For Deep Learning systems and applications, Edge Computing addresses issues with scalability, latency, privacy, reliability, and on-device cost.

Welcome to my first blog on topics in artificial intelligence! Here I will introduce edge computing, with context from deep learning applications.

Now, before we begin, I’d like to take a moment to motivate why edge computing and deep learning can be very powerful when combined:

Deep learning is an increasingly capable branch of machine learning that allows computers to detect objects, recognize speech, translate languages, and make decisions. More machine learning problems are being solved with advanced techniques that researchers discover by the day. Many of these techniques, alongside applications that require scalability, consume large amounts of network bandwidth, energy, or compute power. Modern solutions exist to address some of these concerns, such as parallel computing on a Graphics Processing Unit (GPU) and optical networks for communication. However, with an explosive field like deep learning finding new methods and applications, an entirely new field is being fueled to match, and possibly surpass, this demand. Introducing: Edge Computing.

Edge computing is a distributed computing paradigm that brings computation and data storage closer to the location where it is needed, to improve response times and save bandwidth. Take for example the popular content streaming service Netflix. Netflix has a powerful recommendation system to suggest movies for you to watch. It also hosts an extraordinary amount of content on its servers that it needs to distribute. As Netflix scales up to more customers in more countries, its infrastructure becomes strained. Edge Computing can make this system more efficient. We’ll begin with the two major paradigms within Edge Computing: edge intelligence and the intelligent edge.

More connected devices are being introduced to us by the day. Our phones, computers, tablets, game consoles, wearables, appliances, and vehicles are all gaining varying levels of intelligence — meaning they can communicate with other devices or perform computations to make decisions. This is edge intelligence. You might ask why this is important at all, but it turns out that as our products and services become more complex and sophisticated, new problems arise from latency, privacy, scalability, energy cost, or reliability perspectives. Use of edge intelligence is one way we can address these concerns. Edge intelligence brings a lot of the compute workload closer to the user, keeping information more secure, delivering content faster, and lessening the workload on centralized servers.

Much like edge intelligence, the intelligent edge brings content delivery and machine learning closer to the user. Unlike edge intelligence, the intelligent edge introduces new infrastructure in a location convenient to the end user or end device. Let’s take our Netflix example again. Netflix has its headquarters in California but wants to serve New York City, roughly 4,000 kilometers away. Combine that latency with the time it takes to compute a recommended selection of movies for millions of users, and you’ve got a pretty subpar service. The intelligent edge can fix this!

Instead of having an enormous datacenter with every single Netflix movie stored on it, let’s say we have a smaller datacenter with the top 10,000 movies stored on it, and just enough compute power to serve the population of New York City (rather than enough to serve all of the United States). Let’s also say we’ll build five of these data centers, one for each borough of New York City (Manhattan, Brooklyn, Queens, the Bronx, Staten Island), so that a data center is even closer to each end user, and so that if one site needs maintenance, we have backups. What have we just done? We’ve introduced new infrastructure, albeit with less power, but with just enough to provide an even better experience to the end user than the most powerful systems centralized in one location could. The idea of the intelligent edge is scalable, too: we can imagine it on a country-wide scale or on one as small as a single warehouse.
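To make this concrete, here’s a minimal sketch of the routing idea, assuming made-up coordinates and catalogs (none of this reflects Netflix’s actual infrastructure):

```python
# Hypothetical sketch: route a request to the nearest edge datacenter,
# falling back to the central cloud on a cache miss.

EDGE_SITES = {
    "manhattan": {"location": (40.78, -73.97), "catalog": {"title_001", "title_002"}},
    "brooklyn":  {"location": (40.68, -73.94), "catalog": {"title_001", "title_003"}},
    # ... queens, bronx, staten_island
}
CLOUD = {"location": (37.26, -121.96), "catalog": None}  # cloud stores everything

def squared_distance(a, b):
    return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

def route_request(user_location, title):
    """Return the site that should serve this title for this user."""
    nearest = min(EDGE_SITES.values(),
                  key=lambda site: squared_distance(site["location"], user_location))
    if title in nearest["catalog"]:
        return nearest  # served from the edge: low latency, no cross-country hop
    return CLOUD        # cache miss: fall back to the central datacenter
```

The design choice here is exactly the trade-off described above: each edge site holds only a popular subset of the catalog, and the cloud remains the backstop for everything else.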

For the remainder of this blog, we’ll dive a bit deeper into edge computing paradigms to get a better understanding of how they can improve our deep learning systems, from training to inference. I will also briefly introduce a paper that discusses an edge computing application for smart traffic intersections and use it as context to make the following concepts clearer.

From the paper’s abstract: Smart city intersections will play a crucial role in automated traffic management and improvement in pedestrian safety in cities of the future. They will (i) aggregate data from in-vehicle and infrastructure sensors; (ii) process the data by taking advantage of low-latency, high-bandwidth communications, edge cloud computing, and AI-based detection and tracking of objects; and (iii) provide intelligent feedback and input to control systems. The Cloud Enhanced Open Software Defined Mobile Wireless Testbed for City-Scale Deployment (COSMOS) enables research on technologies supporting smart cities.

Smart cities are perhaps one of the best examples to demonstrate the need and potential for edge compute systems. In this application for traffic intersections, we can imagine several challenges to address as we move toward a more autonomous future: safety-critical decisions demand very low latency, continuous video from many sensors strains bandwidth and energy budgets, and footage of pedestrians raises privacy concerns.

This hopefully stimulates some ideas for how state-of-the-art deep learning solutions have limitations in an application like smart cities. Read on to see how edge computing can help address these concerns!

Five essential technologies for Edge Deep Learning:

1. Deep learning applications on the edge
2. Deep learning inference at the edge
3. Edge computing infrastructure for deep learning
4. Deep learning training at the edge
5. Deep learning for optimizing the edge

Now, let’s take a closer look at each one.

Applications on the edge are built on hybrid hierarchical architectures (try saying that five times fast). Such an architecture is divided into three levels: end, edge, and cloud. Here’s an example from the paper demonstrating real-time video analytics.

At the cloud level, we have our traditional large deep neural network (DNN). Alternatively, the cloud can host the majority of a network that is split between the cloud and the edge. At the edge level, we have the remaining minority of that shared network alongside a smaller, separately trained DNN. Finally, at the end level are our end devices: sensors or cameras for collecting data, or a variety of devices for displaying results and information. What’s important to note here is the collaboration between the cloud and the edge. Edge infrastructure lives closer to the end level. Recall that the edge has less compute capability, so hosting our large DNN there would likely give us poor performance. However, we can still host a smaller DNN that gets results back to end devices quickly. We can additionally run an early segment of a larger DNN on the edge, so that computation begins at the edge and finishes in the cloud. This reduces latency by removing the bottleneck at the edge level and reducing propagation delay to the cloud level.
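To make the cloud-edge split more tangible, here’s a minimal PyTorch sketch of partitioning a network so its early layers run on an edge node and the rest finish in the cloud. The architecture and split point are arbitrary choices for illustration, not the paper’s actual model:

```python
import torch
import torch.nn as nn

# Illustrative DNN split: the "head" runs on an edge node, the "tail" in the cloud.
full_model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),   # early layers: cheap, run at the edge
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 10),                           # later layers: finish in the cloud
)

SPLIT = 2  # layers [0, SPLIT) run on the edge, the rest in the cloud
edge_head = full_model[:SPLIT]
cloud_tail = full_model[SPLIT:]

def edge_forward(x):
    # Runs near the camera; only the intermediate tensor crosses the network.
    with torch.no_grad():
        return edge_head(x)

def cloud_forward(intermediate):
    # Completes the computation on more powerful cloud hardware.
    with torch.no_grad():
        return cloud_tail(intermediate)

frame = torch.randn(1, 3, 224, 224)   # a captured video frame
activation = edge_forward(frame)      # computed at the edge
logits = cloud_forward(activation)    # finished in the cloud
```

Where to place the split is itself a design decision: cutting after a layer whose output is small reduces what has to be transmitted between edge and cloud.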

The best DNNs require deeper architectures and larger-scale datasets, which indirectly requires cloud infrastructure to handle the computational cost of deep architectures and large quantities of data. This requirement, however, limits how ubiquitously deep learning services can be deployed.

Optimization of model inputs, e.g., narrowing down the search space of DL models.

The figure above depicts an inference system entirely on the edge — no cloud at all! Let’s break it down. Our camera system on the far left is placed near a common pedestrian walkway; let’s say it’s there to help us find a child who was separated from their parent. To accomplish this task, our DNN must be capable of both detecting humans and recognizing individuals, to make sure we find the right child (the parent shares a photo, so we know what to look for). Sounds like a job for the cloud, right? What if, instead, we used an edge platform specifically for finding the Region of Interest (RoI)? This yields a much smaller space for object recognition, now that less-relevant parts of the image have been removed. We can feed the reduced search space to a second edge platform that performs the inference to match the child in the provided photo. This distributed system thus completes the same task that would normally be allocated to the cloud. The ability to deploy a system like this dramatically increases the potential for deployment in places farther away from, or completely disconnected from, the cloud!
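Here’s a rough sketch of that two-stage, cloud-free pipeline. The detector and embedding function below are trivial stand-ins for real models, just to show how the reduced search space flows from one edge platform to the next:

```python
import numpy as np

def detect_person_rois(frame):
    """Stage 1 (first edge platform): find person regions of interest.
    Placeholder detector; a real system would run a lightweight DNN
    that returns bounding boxes. Here we just fake two crops."""
    h, w = frame.shape[:2]
    return [frame[: h // 2, : w // 2], frame[h // 2 :, w // 2 :]]

def embed(image):
    """Placeholder recognition feature; a real system would use a
    face/person embedding network. Here: a crude color histogram."""
    hist, _ = np.histogram(image, bins=16, range=(0, 256))
    return hist / max(hist.sum(), 1)

def find_child(frame, reference_photo, threshold=0.9):
    """Stage 2 (second edge platform): match each RoI against the photo."""
    target = embed(reference_photo)
    for roi in detect_person_rois(frame):  # stage 1 output: reduced search space
        feat = embed(roi)
        denom = np.linalg.norm(feat) * np.linalg.norm(target) + 1e-9
        if float(feat @ target) / denom >= threshold:
            return roi  # match found entirely on the edge, no cloud involved
    return None

frame = np.random.randint(0, 256, (480, 640, 3))
photo = np.random.randint(0, 256, (128, 96, 3))
match = find_child(frame, photo)
```

The point of the structure is that the expensive recognition model only ever sees the RoIs, not the full frame.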

So far we’ve talked about how we can stretch a DNN architecture across cloud, edge, and end devices. This is only one approach to deploying edge computing systems; the design, adaptation, and optimization of edge hardware and software are equally important. We will discuss that in this section.

Communication and computation modes for Edge DL.

A variety of concerns may arise regarding training. For example, for real-time training applications, aggregating data in time for training batches may incur high network costs. For other applications, merging data from many sources could violate privacy. While niche, these legitimate concerns justify an exploration of end-edge-cloud systems for deep learning training.

Distributed training has been around for some time. It can be traced back to an early edge computing solution that solved a large-scale linear regression problem with a decentralized stochastic gradient descent method.

Distributed DL training in edge environments.

This figure shows two examples of a distributed training network. On the left, the end devices train models on local data, with weights being aggregated at an edge device one level up. On the right, training data is instead fed to edge nodes that progressively aggregate weights up the hierarchy.
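A minimal numpy sketch of this pattern, assuming a toy linear regression task: each node takes a local gradient step on its own data shard, and an aggregator averages the resulting weights. All hyperparameters are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -3.0])

# Each node holds its own local data shard; no raw data leaves a node.
nodes = []
for _ in range(5):
    X = rng.normal(size=(100, 2))
    y = X @ true_w + 0.1 * rng.normal(size=100)
    nodes.append((X, y))

w = np.zeros(2)  # shared linear-regression weights
lr = 0.1
for step in range(50):
    local_ws = []
    for X, y in nodes:
        grad = 2 * X.T @ (X @ w - y) / len(y)  # local least-squares gradient
        local_ws.append(w - lr * grad)          # one local SGD step
    w = np.mean(local_ws, axis=0)               # aggregate: average the weights

print(w)  # approaches true_w without pooling any raw data
```

Only weight vectors cross the network, which is exactly what makes this attractive when bandwidth or privacy is the constraint.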

Federated Learning: Federated Learning (FL) is an emerging deep learning mechanism for training among end, edge, and cloud. Without requiring data to be uploaded for central cloud training, Federated Learning allows edge devices to train their local DL models on their own collected data and upload only the updated model instead.

As depicted in the figure below, FL iteratively solicits a random set of edge devices to 1) download the global DL model from an aggregation server (hereafter, “server”), 2) train their local models on the downloaded global model with their own data, and 3) upload only the updated model to the server for model averaging.

Federated learning among hierarchical network architectures.
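The three numbered steps above map directly onto the Federated Averaging pattern. Here’s a bare-bones sketch where the “model” is just a weight vector and the clients hold toy data shards (a simplification of real FL systems):

```python
import numpy as np

rng = np.random.default_rng(1)

def local_train(global_w, X, y, lr=0.05, epochs=5):
    """Step 2: a client refines the downloaded global model on its own data."""
    w = global_w.copy()
    for _ in range(epochs):
        w -= lr * 2 * X.T @ (X @ w - y) / len(y)
    return w

# Toy clients, each with a private shard (never uploaded to the server).
clients = []
for _ in range(10):
    X = rng.normal(size=(50, 3))
    y = X @ np.array([1.0, 0.5, -2.0]) + 0.1 * rng.normal(size=50)
    clients.append((X, y))

global_w = np.zeros(3)
for round_ in range(20):
    chosen = rng.choice(len(clients), size=4, replace=False)  # step 1: random subset
    updates = [local_train(global_w, *clients[i]) for i in chosen]
    global_w = np.mean(updates, axis=0)                       # step 3: model averaging
```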

Federated learning can address several key challenges in edge computing networks: non-IID training data, limited communication, unbalanced contributions, and privacy and security. See Section 7, Subsection B of the paper for more details on how FL achieves this.

DNNs (and DL models in general) can extract latent data features, while deep reinforcement learning (DRL) can learn to handle decision-making problems by interacting with the environment. With regard to edge management issues such as edge caching, offloading, communication, and security protection: 1) DNNs can process user information and data metrics in the network, as well as perceive the wireless environment and the status of edge nodes; and, based on this information, 2) DRL can be applied to learn long-term optimal resource management and task scheduling strategies, achieving intelligent management of the edge, i.e., the intelligent edge.

This leaves significant room for open-endedness: we can apply DNNs or DRL to resource management tasks such as caching (i.e., reducing redundant data transmissions), task offloading, or maintenance. See the attached table from the paper for how this may be used.
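As one concrete flavor of this, here’s a tiny tabular Q-learning sketch for a single management decision: whether an edge node should process a task locally or offload it to the cloud. The states, costs, and dynamics are all invented for illustration:

```python
import random

random.seed(0)
ACTIONS = ["local", "offload"]
STATES = ["low_load", "high_load"]

Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}

def step(state, action):
    """Toy environment: offloading is cheap under high load, wasteful otherwise."""
    if action == "offload":
        cost = 2.0 if state == "low_load" else 1.0
    else:
        cost = 1.0 if state == "low_load" else 3.0
    next_state = random.choice(STATES)  # load fluctuates randomly
    return -cost, next_state            # reward = negative latency/energy cost

alpha, gamma, eps = 0.1, 0.9, 0.1
state = "low_load"
for _ in range(5000):
    # epsilon-greedy action selection
    if random.random() < eps:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: Q[(state, a)])
    reward, nxt = step(state, action)
    best_next = max(Q[(nxt, a)] for a in ACTIONS)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
    state = nxt

policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in STATES}
print(policy)  # expect: process locally under low load, offload under high load
```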

This blog covers use cases of edge computing for deep learning at a surface level, highlighting applications for deploying deep learning systems as well as applications for metrics and maintenance. If these ideas resonated with you, you might agree that they open the avenue for more deep learning applications, such as self-driving cars, cloud-based services like gaming, or training DNNs entirely offline for research purposes.

With the rising potential of edge computing and deep learning, the question also arises of how we should measure the performance of these new systems and determine compatibility across the end, edge, and cloud.

On top of this, the introduction of edge hardware comes with its own unique challenges.

Lastly (and before the details get too confusing!), we might also be interested in practical training principles at the edge. The idea here is that we want standards for how our system trains. Many of these questions are open-ended, meaning many solutions can (and cannot) be used to address the problem. Some example questions to ponder: Where does the training data come from? Say we want to deploy a Federated Learning model; the cloud is then no longer the delegator of data, so is data collected on site? Are edge nodes communicating over a blockchain? Will there be synchronization issues because of edge device constraints (e.g., power)? And so on.

References:
