
Simulation of Fog Computing for Internet of Things (IoT) Networking

Paper Type: Free Essay Subject: Computer Science
Wordcount: 5654 words Published: 23rd Sep 2019


Abstract


Despite the expanding use of cloud computing, some issues remain unsolved because of inherent limitations of the cloud, such as unreliable latency, lack of mobility support, and lack of location awareness. By providing elastic resources and services to end users at the edge of the network, Fog computing can address these issues, whereas Cloud computing is more about providing resources deployed in the core network. This project presents the concept and simulation of Fog computing using Cisco Packet Tracer (networking perspective) and Amazon AWS (cloud platform). Cisco Packet Tracer is a network simulation tool, and Amazon AWS is a cloud computing platform; together they can simulate Internet of Things (IoT) nodes that are connected through a Fog network to a core network. The size and the computing speed of the edge network can then be optimized.

                                  TABLE OF CONTENTS

Abstract

1. Introduction
2. Architecture & Implementation
   2.1. Cisco Packet Tracer
   2.2. Amazon Web Service
   2.3. Simulation in AWS Platform
   2.4. Fault Tolerance Environment
3. Results
4. Conclusion
5. Future Scope
6. References

1. Introduction:

Fog Computing is a distributed computing paradigm that acts as an intermediate layer between Cloud data centers and IoT devices and sensors. It offers compute, networking, and storage facilities so that Cloud-based services can be extended nearer to the IoT devices and sensors. The concept of Fog computing was first introduced by Cisco in 2012 to address the challenges faced by IoT applications in conventional Cloud computing. IoT devices and sensors are highly distributed at the edge of the network and have real-time, latency-sensitive service requirements. Cloud data centers are geographically centralized and often fail to meet the storage and processing demands of billions of geo-distributed IoT devices and sensors. The result is network congestion, high latency in service delivery, and poor Quality of Service (QoS).

                                       Fig 1: Fog Computing Environment

Typically, a Fog computing environment is composed of conventional networking components such as routers, switches, set-top boxes, proxy servers, Base Stations (BS), etc., placed in close proximity to the IoT devices and sensors. These components are furnished with various computing, storage, and networking capabilities and can support the execution of service applications. Consequently, these networking components enable Fog computing to create a vast geographical distribution of Cloud-based services. In addition, Fog computing facilitates location awareness, mobility support, real-time interaction, scalability, and interoperability. Fog computing can therefore perform efficiently in terms of service latency, power consumption, network traffic, capital and operational expenses, and content distribution. In this sense, Fog computing meets the requirements of IoT applications better than exclusive use of Cloud computing.


With the advent of Cloud computing, computation technology has entered a new era. Many computation service providers, including Google, Amazon, IBM, and Microsoft, are currently nurturing this popular computing paradigm as a utility. They have enabled cloud-based services such as Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS) to handle various enterprise needs. However, most Cloud data centers are centrally located, far from the proximity of the end users. Consequently, real-time, latency-sensitive computation requests served by distant Cloud data centers regularly suffer large round-trip delay, network congestion, and service quality degradation. To resolve these issues, a new concept named "Edge computing" has recently been proposed alongside centralized Cloud computing.

The fundamental idea of Edge computing is to move computation facilities closer to the source of the data, enabling data processing at the edge of the network. The edge network essentially comprises end devices such as mobile phones and smart objects, and edge devices such as border routers, set-top boxes, bridges, base stations, and edge servers. These components can be outfitted with the capabilities needed to support edge computation. As a localized computing paradigm, Edge computing gives faster responses to computational service requests and usually prevents bulk raw data from being sent towards the core network.
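The traffic-reduction idea can be illustrated with a minimal sketch (a hypothetical function and illustrative values, not part of the simulation): an edge node summarizes a batch of raw sensor readings locally and forwards only the compact summary towards the core network.

```python
# Minimal sketch (illustrative only): an edge node aggregates raw sensor
# readings locally and forwards only a small summary upstream, so the
# core network never carries the bulk raw data.

def edge_summarize(readings):
    """Reduce a batch of raw sensor readings to a small summary record."""
    return {
        "count": len(readings),
        "min": min(readings),
        "max": max(readings),
        "mean": sum(readings) / len(readings),
    }

raw = [21.0, 21.4, 20.9, 35.2, 21.1]     # e.g. temperature samples
summary = edge_summarize(raw)            # four numbers instead of N samples
print(summary["count"], round(summary["mean"], 2))
```

Only the summary record crosses the edge boundary; the raw samples stay (and can be processed further) at the edge device.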

                                                      Fig 2: Working of Fog

Fog computing can also empower Edge computation. However, besides the edge network, Fog computing can be extended to the core network as well: both edge and core networking components can be utilized as computational infrastructure in Fog computing. Consequently, multi-level application deployment and service-demand mitigation for huge numbers of IoT devices and sensors can easily be achieved through Fog computing. In addition, Fog computing components at the edge network can be placed nearer to the IoT devices and sensors than cloudlets and cellular edge servers. As IoT devices and sensors are densely dispersed and require real-time responses to service requests, this approach enables IoT data to be stored and processed within the vicinity of the IoT devices and sensors. Fog computing can also extend cloud-based services such as IaaS and PaaS. Due to these features, Fog computing is considered better suited and better structured for IoT than other related computing paradigms.

2. Architecture & Implementation:
2.1. Cisco Packet Tracer:

Cisco Packet Tracer is a network simulation tool designed by Cisco Systems that allows users to create network topologies and study different network behaviors. It lets users simulate the configuration of Cisco routers and switches through a simulated command-line interface.

                                                               Fig: 3 Fog Computing Architecture in Cisco Packet Tracer

In Cisco Packet Tracer we created a network topology with the cloud server as the topmost layer and the Fog servers in the middle; the Fog servers are connected to the end devices through switches and routers. We used generic switches and routers in the topology. An IP address has been assigned to each router, end device, and server, and static routing has been configured on router 0 and router 1. Pinging the cloud server from the host PC (end device) takes 9 ms on average, while pinging the Fog server from the same host takes 5 ms on average. From this comparison we conclude that latency to the Fog server is lower than latency to the cloud server.
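The latency comparison can be expressed as a small calculation. The individual ping samples below are illustrative placeholders; only the averages (9 ms for the cloud path, 5 ms for the Fog path) correspond to the simulation results described above.

```python
# Minimal sketch: average a set of ping round-trip times (ms) for the
# cloud path and the fog path, mirroring the comparison in the text.
# The individual samples are made up; the averages match the reported
# 9 ms (cloud) vs 5 ms (fog) results.

def average_rtt(samples_ms):
    return sum(samples_ms) / len(samples_ms)

cloud_pings = [10, 9, 8, 9]   # host -> cloud server
fog_pings = [5, 6, 4, 5]      # host -> fog server

cloud_avg = average_rtt(cloud_pings)
fog_avg = average_rtt(fog_pings)
print(f"cloud avg {cloud_avg} ms, fog avg {fog_avg} ms")
```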

2.2.            Amazon Web Service:

Amazon Web Services (AWS) is a computing service that provides cloud computing infrastructure. Amazon Elastic Compute Cloud (EC2) allows us to launch a variety of cloud instances while retaining deep, system-level control of computing resources running in the Amazon environment. EC2 reduces the time required to boot a new server instance, allows capacity to be scaled quickly as computing requirements change, and lets users build and configure instances with their desired operating system.

2.3.            Simulation in Amazon Web Service Platform:

2.3.1. Testing without Fog Node:

1)     We set up an EC2 instance in AWS which acts as a web server. Figure 4 shows the deployed EC2 instance.


                                           Fig: 4 Deployed EC2 instance acting as a web server

2)     Figure 5 shows the Linux web server page.


                          Fig:5 Amazon Linux AMI page from EC2 instance serving as a web server

2.3.2. Testing with Fog Nodes:

CloudFront is an AWS service whose role is to distribute static and dynamic web pages to customers. CloudFront serves data from a global network of data centers: when a customer requests data through CloudFront, the request is routed to the server location with the lowest latency, so the data is delivered with the best possible performance.

  • When the data is already in the edge location with the lowest latency, CloudFront delivers it immediately.
  • When the data is not in that edge location, CloudFront retrieves it from an HTTP server or any other endpoint that has been defined as the origin for the data.

                             Fig:6 Content Delivery Network (CDN) Architecture
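The hit/miss behaviour described above can be sketched with a toy edge cache (an illustrative model, not the CloudFront implementation):

```python
# Minimal sketch (illustrative, not the CloudFront implementation): an
# edge location keeps a cache; a hit is served locally, a miss falls
# back to the origin server and the object is cached for next time.

class EdgeLocation:
    def __init__(self, origin):
        self.origin = origin      # maps path -> content (the origin server)
        self.cache = {}

    def get(self, path):
        if path in self.cache:                 # hit: serve from the edge
            return self.cache[path], "hit"
        content = self.origin[path]            # miss: fetch from origin
        self.cache[path] = content             # cache for later requests
        return content, "miss"

origin = {"/index.html": "<h1>Amazon Linux AMI test page</h1>"}
edge = EdgeLocation(origin)
print(edge.get("/index.html")[1])   # first request: miss
print(edge.get("/index.html")[1])   # second request: hit
```

The second request never leaves the edge location, which is the source of the latency reduction measured later in the results.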


Figure 7 shows the CDN distribution created for this environment.

                                                 Fig:7 Creation of CDN

                            Fig:8 Webpage accessed via CloudFront

2.4.            Fault Tolerance Environment:

To implement fault tolerance and high-performance capability in our environment, we used the Auto Scaling and load balancing services in AWS. The architecture is explained below.

                 Figure 9: Architecture in AWS for fault tolerance in the AWS environment  

Elastic Load Balancing (ELB) distributes incoming HTTP traffic across many destinations, such as EC2 instances. ELB handles traffic within a single Availability Zone or across many Availability Zones. ELB offers three types of load balancers, which provide the high availability, automatic scaling, and robust security necessary to make applications fault tolerant.

ELB provides load balancing across multiple targets. The Classic Load Balancer is used for applications built on the EC2-Classic network.

Amazon's Auto Scaling service helps ensure that the right number of Amazon EC2 instances is available to handle the traffic for the user's application. Collections of EC2 instances are called Auto Scaling groups. A minimum number of instances can be set for each Auto Scaling group, and Auto Scaling ensures that the group never goes below this limit; similarly, a maximum number of instances can be set, and Auto Scaling never lets the group grow above it. Instances can thus be created or terminated automatically as demand changes.
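The group-size behaviour described above can be sketched as follows (an illustrative model with hypothetical instance ids, not the AWS implementation):

```python
# Minimal sketch (illustrative, not the AWS implementation): an Auto
# Scaling group keeps the number of healthy instances at the desired
# count, within [minimum, maximum], replacing any instance that fails.

class AutoScalingGroup:
    def __init__(self, minimum, maximum, desired):
        assert minimum <= desired <= maximum
        self.minimum, self.maximum = minimum, maximum
        self.desired = desired
        self.instances = [f"i-{n}" for n in range(desired)]  # hypothetical ids
        self._next_id = desired

    def terminate(self, instance_id):
        self.instances.remove(instance_id)
        self.reconcile()

    def reconcile(self):
        # Launch replacements until the group is back at the desired size.
        while len(self.instances) < self.desired:
            self.instances.append(f"i-{self._next_id}")
            self._next_id += 1

group = AutoScalingGroup(minimum=1, maximum=4, desired=2)
group.terminate("i-0")             # simulate a server fault
print(len(group.instances))        # the group is back at 2 instances
```

This is exactly the behaviour exercised in step [5] and step [6] of the implementation below: terminating an instance triggers a replacement launch.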

2.4.1.      Implementation of the fault tolerant architecture

[1]   A Classic Load Balancer is created in AWS, allowing HTTP traffic to flow through the network.

                              Fig: 10 Load balancer Creation

[2]   An Auto Scaling group is created using the AWS service, set up with 2 instances, and the other configurations are applied to the environment.

                                        Fig: 11 Autoscaling group is created

[3]   The two running instances spawned by the Auto Scaling group.

                       Fig: 12 Two running instances

[4]   The Apache web server running on both instances.

                                         Fig:13 Apache Web Server

[5]   One of the instances is terminated to simulate an environment in which a fault occurs in our web server.


                                                Fig: 14 Terminating one of the EC2 instances

[6]   A new EC2 instance is created because of the Auto Scaling group configured earlier. Thus, if any server goes down for any reason, the Auto Scaling group helps ensure that the right number of EC2 instances is present to handle the load in the environment.

                              Fig: 15 Two EC2 Instances are running

3. Results:
3.1. Cisco Packet Tracer:

As described in Section 2.1, the Packet Tracer topology places the cloud server at the top layer and the Fog servers in the middle, connected to the end devices through switches and routers, with static routing configured on router 0 and router 1. Pinging the cloud server from the host PC (end device) takes 9 ms on average, while pinging the Fog server from the same host takes 5 ms on average. From this comparison we conclude that latency to the Fog server is lower than latency to the cloud server.


                                  Fig: 16 Pinging from Host to Cloud Server


                                     Fig:17 Pinging from Host to Fog Server


                             Fig: 18 Traceroute to Cloud Server


                                          Fig: 19 Traceroute to Fog Server

3.2.  Amazon Web Service Platform:

In AWS we created a static website and accessed it from India, first without Fog nodes, to observe the latency. Accessing the website from India over a VPN, we observed an average latency of 571 ms, the highest of our tests. When AWS CloudFront is configured for the same environment and the same website is accessed from India, the difference between the two latencies is clear: the latency drops to 88 ms. Thus, by using a CDN, low-latency access to websites can be achieved from any location in the world.
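The improvement can be quantified directly from the two averages reported above (571 ms before CloudFront, 88 ms after):

```python
# Latency reduction from the measurements above: 571 ms without the CDN
# versus 88 ms with CloudFront in front of the same site.
before_ms, after_ms = 571, 88
reduction = (before_ms - after_ms) / before_ms * 100
print(f"latency reduced by {reduction:.1f}%")   # roughly 84.6%
```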


                                     Fig:20 Ping from India before CDN deployment


                                    Fig:21 Ping from India after CDN Deployment

In the fault-tolerant environment, it is observed from the figures that as soon as one of the instances exceeds the set CPU threshold or terminates, a new instance automatically spawns within a couple of minutes. The Auto Scaling service thus monitors the environment to make sure the system runs at the desired performance level: when there is a spike in network traffic, Auto Scaling automatically adds instances, reducing the load on the system.

                                       Fig: 22 Network in before instance fails

                                                  Fig: 23 Network out before instance fails

                          Fig: 24 CPU utilization before instance fails

               Fig: 25 Network packets in before instance fails

                          Fig: 26 Network packets out before instance fails

                   Fig: 27 Network in after 1st instance fails (blue line indicates new instance)

                     Fig: 28 Network out after 1st instance fails (blue line indicates new instance)

Fig: 29 Network packets in after 1st instance fails (blue line indicates new instance)

Fig: 30 Network packets out after 1st instance fails (blue line indicates new instance)

Fig: 31 CPU utilization after 1st instance fails (blue line indicates new instance)

The following charts show information about the devices from which CloudFront received requests for the selected distribution. The Devices charts are available only for web distributions that had activity during the specified period and that have not been deleted.

This chart shows the percentage of requests that CloudFront received from the most popular types of device. Valid values include:

  • Desktop
  • Mobile
  • Unknown

                                                         Fig:32 Types of Devices

The chart shows each outcome as a percentage of all viewer requests for the created CloudFront distribution:

  • Hits: viewer requests for which the object is served from a CloudFront edge cache.
  • Misses: viewer requests for which the object is not currently in a cache, so CloudFront must fetch it from the origin.
  • Errors: viewer requests that resulted in an error, so CloudFront did not serve the object.

                                   Fig:33 Cache Results

4.      Conclusion

Fog computing is emerging as an attractive solution to the problem of data processing in the Internet of Things. Rather than outsourcing all operations to the cloud, it also utilizes devices at the edge of the network that have more processing power than the end devices and are closer to the sensors, thus reducing latency and network congestion. Fog computing takes advantage of both edge and cloud computing: it benefits from edge devices' proximity to the endpoints while also leveraging the on-demand scalability of cloud resources.

5. Future Scope
5.1. Security Aspects
5.1.1. Authentication

In Fog computing, authentication plays an important role in security. The main security issue of fog computing has been identified as authentication at the different levels of fog nodes [10]: traditional PKI-based authentication is inefficient and scales poorly. A cheap, secure, and user-friendly solution to the authentication problem in local ad-hoc wireless networks has been proposed that relies on physical contact for pre-authentication over a location-limited channel [11]. Similarly, NFC can be used to simplify the authentication procedure in the case of cloudlets [12]. With the development of biometric authentication in mobile and cloud computing, for example fingerprint authentication, face authentication, and touch- or keystroke-based authentication, it will also be useful to apply biometric-based authentication in Fog computing.
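As a generic illustration of how a key established during pre-authentication could later be used, the following is a standard HMAC challenge-response sketch (illustrative only, not the scheme from the cited papers):

```python
# Minimal sketch of challenge-response authentication between an IoT
# device and a fog node, assuming a key was already shared over a
# location-limited channel during pre-authentication. Generic
# illustration only, not the protocol from the cited papers.
import hashlib
import hmac
import secrets

shared_key = b"pre-authenticated-key"   # hypothetical pre-shared secret

def respond(key, challenge):
    # The device proves knowledge of the key without revealing it.
    return hmac.new(key, challenge, hashlib.sha256).digest()

def verify(key, challenge, response):
    expected = hmac.new(key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = secrets.token_bytes(16)          # fog node's fresh nonce
response = respond(shared_key, challenge)    # computed on the device
print(verify(shared_key, challenge, response))   # True for the right key
```

A fresh nonce per session prevents replay; the key itself never crosses the network after the initial location-limited exchange.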

5.1.2.      Privacy

Privacy is another vital aspect of security in Fog computing. There is always a risk that important information may leak when communication between the clients and the cloud is continuous. Data privacy is therefore of the highest significance, because the fog nodes are close to the end users and carry important data. Several algorithms are in use that address the data privacy issue to some degree. Techniques such as homomorphic encryption can be used to permit privacy-preserving aggregation at the local gateways without decryption [17]. Differential privacy [18] can be used to guarantee that the privacy of an arbitrary single entry in the data set is not exposed in the case of statistical queries.
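As a concrete illustration of the differential-privacy idea, the textbook Laplace mechanism adds noise scaled to a query's sensitivity, sketched here for a simple count query (illustrative values, not a construction from the cited work):

```python
# Minimal sketch of the Laplace mechanism for differential privacy:
# a count query has sensitivity 1 (adding or removing one record
# changes the count by at most 1), so Laplace noise with scale
# sensitivity/epsilon is added to the true answer. Textbook
# illustration only.
import math
import random

def laplace_noise(scale, rng):
    # Sample Laplace(0, scale) via inverse-CDF from a uniform draw.
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def private_count(records, predicate, epsilon, rng):
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon, rng)

rng = random.Random(42)                       # fixed seed for repeatability
records = [18, 25, 31, 47, 52, 64]            # e.g. ages in a data set
noisy = private_count(records, lambda age: age > 30, epsilon=0.5, rng=rng)
print(round(noisy, 2))                        # near the true count of 4
```

Smaller epsilon means larger noise and stronger privacy; the query answer stays useful in aggregate while masking any single entry.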

6. References

[1] F. Bonomi, R. Milito, J. Zhu, and S. Addepalli, "Fog Computing and Its Role in the Internet of Things," Proc. Workshop on Mobile Cloud Computing, ACM, 2012.

[2] S. Yi, Z. Hao, Z. Qin, and Q. Li, "Fog Computing: Platform and Applications," 2015 Third IEEE Workshop on Hot Topics in Web Systems and Technologies (HotWeb), Washington, DC, USA, 2015, pp. 73–78. doi:10.1109/HotWeb.2015.22

[3] https://www.rtinsights.com/what-is-fog-computing-open-consortium/

[4] http://linkites.com/fog-computing-a-new-approach-of-internet-of-things/

[5] https://www.thbs.com/downloads/Cloud-Computing-Overview.pdf

[6] F. Bonomi et al., "Fog Computing: A Platform for Internet of Things and Analytics," Big Data and Internet of Things: A Roadmap for Smart Environments, N. Bessis and C. Dobre, eds., Springer, 2014, pp. 169–186.

[7] Y. Cao et al., "FAST: A Fog Computing Assisted Distributed Analytics System to Monitor Fall for Stroke Mitigation," Proc. 10th IEEE Int'l Conf. Networking, Architecture and Storage (NAS 15), 2015, pp. 2–11.

[8] V. Stantchev et al., "Smart Items, Fog and Cloud Computing as Enablers of Servitization in Healthcare," J. Sensors & Transducers, vol. 185, no. 2, 2015, pp. 121–128.

[9] H. Gupta et al., "iFogSim: A Toolkit for Modeling and Simulation of Resource Management Techniques in Internet of Things, Edge and Fog Computing Environments," tech. report CLOUDS-TR-2016-2, Cloud Computing and Distributed Systems Laboratory, Univ. of Melbourne, 2016; http://cloudbus.org/tech_reports.html

[10] I. Stojmenovic and S. Wen, "The Fog Computing Paradigm: Scenarios and Security Issues," Proc. 2014 Federated Conf. Comp. Sci. and Info. Sys. (FedCSIS 14), 2014, pp. 1–8.

[11] D. Balfanz, D. K. Smetters, P. Stewart, and H. C. Wong, "Talking to Strangers: Authentication in Ad-Hoc Wireless Networks," Proc. NDSS, 2002.

[12] S. Bouzefrane, A. F. B. Mostefa, F. Houacine, and H. Cagnon, "Cloudlets Authentication in NFC-Based Mobile Computing," Proc. MobileCloud, IEEE, 2014.

[13] S. Shin and G. Gu, "CloudWatcher: Network Security Monitoring Using OpenFlow in Dynamic Cloud Networks," Proc. ICNP, IEEE, 2012.

[14] N. McKeown et al., "OpenFlow: Enabling Innovation in Campus Networks," ACM SIGCOMM CCR, vol. 38, 2008.

[15] F. Klaedtke, G. O. Karame, R. Bifulco, and H. Cui, "Access Control for SDN Controllers," Proc. HotSDN, vol. 14, 2014.

[16] K. K. Yap et al., "Separating Authentication, Access and Accounting: A Case Study with OpenWiFi," Open Networking Foundation, tech. report, 2011.

[17] R. Lu et al., "EPPA: An Efficient and Privacy-Preserving Aggregation Scheme for Secure Smart Grid Communications," IEEE TPDS, vol. 23, 2012.

[18] C. Dwork, "Differential Privacy," Encyclopedia of Cryptography and Security, Springer, 2011.

