Network Devices: An Overview (2023)


Peng Zhang, in Advanced Industrial Control Technology, 2010

11.1.1 Summary

A hub (sometimes called a network hub or concentrator) is a network device that receives a data packet from a network node and forwards it to all other connected nodes. In its simplest form, a hub works by duplicating the data packets received at one port and making them available on all other ports, allowing data to be shared among all devices connected to the hub. It thus concentrates network traffic coming from multiple hosts and regenerates the signal. A hub therefore needs enough ports to connect the machines, usually 4, 8, 16, or 32 (Figure 11.2(A) shows some hubs). Like a repeater, a hub operates at layer 1 of the OSI reference model, the physical layer, which is why it is sometimes called a multiport repeater.


Figure 11.2. Some hubs and switches used in networking: (A) from top to bottom, a 4-port hub, an 8-port hub, and a 16-port hub; (B) from top to bottom, a wireless controllable network switch, a typical small-office network switch, and an Ethernet switch.

A switch (sometimes called a switching hub) is a networking device that filters and forwards data packets on a network. A switch is typically a multiport device (it can have 48 or more ports; Figure 11.2(B) shows some switches) and is an active element that works at layer 2 of the OSI model. Unlike a standard hub, which simply replicates whatever it receives, a switch stores the Media Access Control (MAC) addresses of the devices connected to it. When the switch receives a data packet, it looks up the destination MAC address and forwards the packet directly to the receiving device. The switch analyzes the frames arriving on its input ports and filters the data so that it flows only to the correct ports; as a result, it can act both as a bridge when filtering and as a hub when handling connections. A network switch can give each device the full performance potential of its network connection and is therefore preferable to a standard hub.
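The learn-then-forward behavior described above can be sketched in a few lines. This is a hypothetical model for illustration, not any vendor's implementation; the port numbering and frame representation are invented.

```python
class LearningSwitch:
    """Toy model of a layer-2 switch that learns MAC addresses."""

    def __init__(self, num_ports):
        self.num_ports = num_ports
        self.mac_table = {}  # MAC address -> port number

    def receive(self, in_port, src_mac, dst_mac):
        """Return the list of ports the frame is forwarded to."""
        # Learn: associate the source MAC with the port it arrived on.
        self.mac_table[src_mac] = in_port
        # Known destination: forward out the single matching port.
        if dst_mac in self.mac_table:
            out = self.mac_table[dst_mac]
            return [] if out == in_port else [out]
        # Unknown destination: flood to all ports except the arrival port,
        # exactly as a hub would.
        return [p for p in range(self.num_ports) if p != in_port]

sw = LearningSwitch(4)
print(sw.receive(0, "aa", "bb"))  # "bb" unknown -> flood: [1, 2, 3]
print(sw.receive(1, "bb", "aa"))  # "aa" was learned on port 0 -> [0]
print(sw.receive(0, "aa", "bb"))  # "bb" now learned on port 1 -> [1]
```

Once both hosts have sent a frame, no flooding occurs and each frame travels only the one cable it needs, which is the bandwidth advantage the text describes.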

The following points discuss how hubs and switches perform their functions in networks.

(1) A network hub or repeater is a very simple transmission device. Hubs do not manage the traffic passing through them: every packet entering any port is forwarded out all other ports except the port of arrival. When two or more nodes try to send packets at the same time, a collision occurs, and the network nodes must go through a procedure to resolve the conflict. The procedure is dictated by the Ethernet Carrier Sense Multiple Access with Collision Detection (CSMA/CD) protocol. Each Ethernet adapter has a receiver and a transmitter. If adapters did not have to listen for collisions with their receivers, they could transmit data while simultaneously receiving (full duplex). Since they must instead operate in half-duplex mode (data flows in only one direction at a time), and a hub relays data from one node to all nodes, the maximum bandwidth is shared by all nodes connected to the hub.
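The recovery routine CSMA/CD uses after a collision is truncated binary exponential backoff. A minimal sketch of the slot-count computation, assuming the classic Ethernet rule that the exponent is capped at 10 collisions (the helper name is ours):

```python
import random

def backoff_slots(collision_count, rng=random.Random(0)):
    """Truncated binary exponential backoff, as in Ethernet CSMA/CD:
    after the n-th consecutive collision, wait a random number of slot
    times drawn uniformly from 0 .. 2**min(n, 10) - 1."""
    k = min(collision_count, 10)   # the backoff window stops doubling at 10
    return rng.randrange(2 ** k)

# After the first collision a station waits 0 or 1 slots; the window
# doubles with each further collision, up to a maximum of 1023 slots.
for n in (1, 2, 3, 10, 16):
    assert 0 <= backoff_slots(n) <= 2 ** min(n, 10) - 1
```

The doubling window is what lets many stations sharing one hub eventually de-synchronize their retransmissions without any central coordination.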

A hubbed network behaves like a shared medium; only one device can successfully transmit at a time, and each host remains responsible for collision detection and retransmission. It is possible to connect multiple hubs together to centralize a larger number of machines; this is sometimes known as a daisy chain. To do this, the hubs are connected with crossover cables, a type of cable in which the inputs on one side are wired to the outputs on the other. Hubs usually have a special port called an uplink for connecting two hubs with a straight-through patch cable, and some hubs can automatically cross or uncross their ports depending on whether they are connected to a host or to another hub. Some hubs also have special stacking ports so they can be combined to support more hubs than simple daisy chaining via Ethernet cables allows, but even then a large Fast Ethernet network will likely require switches to overcome the chaining limitations of hubs.

(2) A network switch typically includes a set of input ports for receiving packets arriving from the network buses, a set of output ports for forwarding packets onto the buses, and a switching fabric, such as a crosspoint switch, for routing packets from each input switch port to the output switch ports that should forward them. The switch's input and output ports often include buffers to hold packets until they can be forwarded through the switch fabric or onto a network bus. Buffering at an output port allows it to receive data faster than it can forward it, at least until the buffer is full; once the buffer is full, arriving data is lost. A network switch port usually uses one or more SDRAMs to implement its memory buffer because of their low cost. Some input switch ports include protocol processors that convert each incoming packet into a sequence of uniformly sized cells. The input port stores the cells in its buffer until it can forward them through the switch fabric to one of the output ports. Each output switch port, in turn, stores the cells received from the switch fabric in its memory buffer and forwards them to another protocol processor, which reassembles them into a packet for forwarding onto a network bus.

The traffic manager of a network switch forwards packets of the same flow in the order it receives them, but may preferentially forward packets with high priority. An output switch port may contain a traffic manager that stores cells received through the switch fabric in its buffer and then forwards them to another protocol processor. The output port's protocol processor reassembles the cells into a packet and forwards the packet over a network bus. A network switch output port can forward a packet to another network switch or to a network node on a channel selected from one of several different network buses, and the traffic manager of the output port decodes the packet's flow identification number (FIN) to determine which output bus or bus channel should carry the packet out of the port. The output port's traffic manager can also decode a packet's FIN to determine its priority and its minimum and maximum forwarding rates. The network switch also includes an address translation system that maps a destination network address to the output port that can forward the packet toward that network address. When an input port receives an incoming packet, it stores the packet, reads its destination network address, consults the address translation system to determine which output port should forward the packet, and then sends a routing request to the switch fabric.
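The scheduling behavior described here, strict priority with arrival order preserved within each priority level, can be modeled with a priority queue. A minimal sketch (the class name and packet labels are hypothetical, for illustration only):

```python
import heapq
import itertools

class TrafficManager:
    """Toy model of priority forwarding that preserves per-flow order:
    higher-priority packets go first; among equal priorities, arrival
    order is kept (the monotonic counter breaks ties FIFO)."""

    def __init__(self):
        self._heap = []
        self._seq = itertools.count()

    def enqueue(self, packet, priority):
        # heapq is a min-heap, so negate priority to pop highest first.
        heapq.heappush(self._heap, (-priority, next(self._seq), packet))

    def dequeue(self):
        return heapq.heappop(self._heap)[2]

tm = TrafficManager()
tm.enqueue("flow-A #1", priority=0)
tm.enqueue("flow-A #2", priority=0)
tm.enqueue("voice", priority=7)
print(tm.dequeue())  # voice (high priority forwarded preferentially)
print(tm.dequeue())  # flow-A #1 (order within the flow preserved)
print(tm.dequeue())  # flow-A #2
```

Real switch hardware implements this with per-class queues and a scheduler rather than a software heap, but the ordering guarantees are the same ones the text describes.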

Hubs and switches are used to divide networks into multiple subnets. For example, when a plant floor dynamically exchanges large amounts of data over the network, its traffic slows the network down for other users. To solve this problem, two switches can be used, connecting the plant-floor computers to one network while the other computers connect to a second network. The two switches can then be connected to a router that sits between the internal networks and the Internet. The plant floor's traffic is seen only by the computers on that network, but when they need to reach a computer on the other network, the data is sent through the intermediate router.

Modern network equipment combines multiport hub connectivity with selective routing of data packets from different protocol networks using bridging (cf. Section 11.3). Modern switches are also plug-and-play. This means that switches can learn the unique addresses of the devices connected to them, even if those devices are attached through a hub that is connected to the switch, without any programming. If a computer or industrial controller is connected directly to a switch, that switch will pass only traffic directed to that device. By controlling the flow of information between ports, switches gain major advantages over today's shared environments:

(A)

Having all devices connected directly to a switch port eliminates the possibility of port-to-port collisions. This ensures that packets arrive much more reliably than in a shared environment.

(B)

Each port has more available bandwidth. In a shared environment, any port on the system can consume all of the bandwidth at any given time, which means that during a traffic spike the network availability of all other nodes is greatly reduced. In a fully switched environment, however, only traffic destined for or generated by a particular node flows over the cable between that node and the switch.

In short, switches and hubs provide industrial users with much of the functionality that in the past could only be provided by separate proprietary control networks. The elimination of collisions by connecting each node to a switched port, along with the switch's ability to prevent control and office traffic from inadvertently interacting while still sharing one physical network, allows industrial users to enjoy the openness and massive bandwidth of Ethernet without compromising the integrity of their control traffic.


URL:

https://www.sciencedirect.com/science/article/pii/B9781437778076100117

Circuits Collection, Volume V

Richard Markell (Editor), in Analog Circuit Design, 2013

Introduction

With the explosive growth of data networking equipment came the need to support many different serial protocols through a single port. The problem facing interface designers is getting the circuits for each serial protocol to share the same pins without introducing conflicts. The main source of frustration is that each serial protocol requires a different line termination, which cannot be easily or cheaply changed.

With the introduction of the LTC1343 and LTC1344, a fully software-selectable serial interface port becomes possible using an inexpensive DB-25 connector. The chips form a serial interface port that supports the V.28 (RS232), V.35, V.36, RS449, EIA-530, EIA-530A, or X.21 protocols in either DTE or DCE mode and is compliant with NET1 and NET2. The port runs from a single 5 V supply and supports an echoed clock and loopback configuration that helps eliminate glue logic between the serial controller and the line transceivers.

A typical application is shown in Figure 38.29: two LTC1343s and one LTC1344 interface with a DB-25 connector, shown here in DTE mode.


Figure 38.29. LTC1343/LTC1344 Typical Application

Each LTC1343 contains four drivers and four receivers, and the LTC1344 contains six switchable resistive line terminators. The first LTC1343 is connected to the clock and data lines along with the LL (local loopback) and TM (test mode) diagnostic signals. The second LTC1343 is connected to the control-signal lines along with the RL (remote loopback) diagnostic signal. A single-ended driver and receiver can be split off to support the RI (ring indicate) signal. The LTC1344's switchable line terminations connect only to the high-speed data and clock signals. When the interface protocol is changed via the digital mode-select pins (not shown), the drivers and receivers are automatically reconfigured and the appropriate line terminations are connected.


URL:

https://www.sciencedirect.com/science/article/pii/B9780123978882000389

State of the art and practices to improve data storage energy efficiency

Marcos Dias de Assunção, Laurent Lefèvre, in Advances in Computers, 2012


4.3.5 Consolidation in the storage and fabric layers

Consolidation of data storage and network devices can lead to significant savings in space and energy consumption. Some vendors argue that by deploying multiprotocol network devices, the network fabric can be consolidated onto fewer resources, reducing footprint, power consumption, and cooling requirements.9 In addition, the increasing use of blade servers and of virtual machine migration is driving the use of networked storage, allowing storage efficiency to be improved through consolidation [35].

Storage consolidation is not a new topic. In fact, for several years, SANs have offered some level of storage consolidation and increased efficiency by allowing disk arrays to be shared across multiple servers on a private network and preventing islands of data. Moving from DAS to networked storage systems therefore offers several benefits that can increase energy efficiency. These benefits include [35]:

Shared capacity: Administrators can improve storage utilization by pooling storage capacity and allocating it to servers as needed. This helps reduce the islands of storage created by directly attached storage.

Storage provisioning: Storage can be provisioned more granularly. Volumes can be provisioned in any quantity, rather than allocating physical capacity or entire disks to a specific server. Additionally, volumes can be resized as needed without server downtime.

Network boot: This allows administrators to move not only server data but also server boot images to network storage. Boot volumes can be created and accessed at boot time without the need for local storage on the server.

Improved management: Storage consolidation eliminates many of the individual tasks of backup, data recovery, and software upgrades. These tasks can be performed centrally with a single set of tools.

Storage equipment manufacturers often provide multiple consolidated solutions under the unified storage banner. Traditionally, enterprise storage has used a different storage system for each storage function: one solution might be provided for online network-attached storage, another for backup and archiving, while a third is used for secondary or near-line storage. These devices can use different technologies and protocols. With the goal of minimizing costs by reducing space and power requirements, unified storage solutions often support multiple protocols and provide transparent, consistent access to a pool of storage, regardless of the storage tier on which the data resides [38] (e.g., NetApp's Data ONTAP and EMC's Celerra unified storage platforms). Software systems are used to migrate data across the different storage tiers according to usage patterns.

Data center architectures

C. De Cusatis, in Optical Connections for Data Centers, 2017

1.4 Application architectures

Within the data center, an application architecture describes how functions are distributed across servers to provide end services to data center users. In the early 1960s, when computing power was expensive, application processing was done on a large centralized computer. Users interacted with this system through so-called "dumb terminals" (keyboards and screens without computational power of their own). Data networks were limited to modems that communicated over the public telephone network between dumb terminals and large computers, usually at very low speeds (perhaps 10 to 56 kbit/s). With the development of microprocessors came the first personal computers in the mid-1970s. Although they could be configured to emulate a terminal connected to a larger computer, many new applications appeared that ran on desktop or laptop computers. As more and more data was distributed across individual users' computers, the need for an efficient file-sharing mechanism quickly became apparent. Copying files to a disk, manually carrying the disk to another computer, and reinstalling the files was too cumbersome for most users. This led to the development of local area networks (LANs) in the late 1980s and early 1990s, which allowed computers to share files.

LANs enabled a new type of architecture called client-server computing, as shown in Fig. 1.5. The processing work can be split between the client PC and a larger centralized server. In general, client performance was lower than server performance for most applications, although steady improvements in client performance with each generation of technology have changed the types of applications that are preferably run on the server. Client-server architectures became widespread during and after the 1990s and are still used by many large companies today. In a traditional client-server design, the client can be a personal computer, while the server is a mainframe or enterprise-class computer. The server-centric approach offered all the benefits of centralized processing and control: all data communication passes through the central server, and the implementation of policy-based security and control is simplified. Some enterprise architectures have tried to implement "thin clients" (essentially dumb terminals) to reduce costs. More recently, with the increasing demands of the telecommuting and mobile workforce and the decreasing cost of computing hardware, many users have needed the flexibility of a more powerful client device. In early client-server implementations, a more powerful client was often underutilized, wasting disk space, processing power, and bandwidth. The modern client-server model has evolved to take advantage of Internet connectivity for cloud computing and the application-as-a-service model. The role of clients has expanded to include smartphones and other mobile devices served by virtual machines in warehouse-scale cloud data centers around the world.


Figure 1.5. Client-Server Architecture.

An alternative design is the peer-to-peer system shown in Fig. 1.6. Computers work directly together as peers to handle all or part of the workload without the help of a central server. This is made possible by the greater processing power of low-cost commodity servers, which can be underutilized in a client-server design. Well-known examples of peer-to-peer architectures are the BitTorrent file-sharing service, Skype's voice-over-IP system, and distributed processing applications such as SETI@home [21]. Similar designs are also being introduced in large corporate environments. In a file-sharing program like BitTorrent, there is no concept of downloading files from a central server to a client. Instead, each computer hosts a client program and acts as a source for file downloads. When a computer requests a file, a group of computers containing all or part of the file is assembled, and multiple computers in that group simultaneously provide different parts of the file. These downloads are simultaneous and parallel, and the file segments are reassembled on the destination computer to form the complete file. As more users share the same file, the download speed for that file increases, because a computer can fetch smaller and smaller portions of the file from each computer in the group. Another popular example of a peer-to-peer architecture (with one exception, which is technically a centralized server) is Skype's VoIP system, which offers free or low-cost Internet calling. This design is actually a hybrid in that it requires a central login server where users authenticate to the system. All other Skype operations use a peer-to-peer design. To find the name and address of someone you want to call, Skype uses a directory lookup process. Compute nodes on the Skype network can be promoted to "supernodes" if they have enough memory, bandwidth, and processor power. Skype directory lookup (like other signaling functions) is a peer-to-peer process running on the supernodes. Call data is carried by routing voice packets between the two host computers at either end of the call.


Figure 1.6. Peer-to-peer architecture.

In addition to hardware architectures, data centers also use various software architectures. A detailed discussion of software architecture is beyond the scope of this chapter, although we provide some examples for completeness. Modern cloud computing systems can use a service-oriented architecture, which is essentially an application development methodology that builds solutions by integrating one or more web services [3]. Each web service is treated as a subroutine or function call dedicated to performing a specific task (for example, processing a credit card or checking airline departure times). These web services are invoked as remote procedure calls or through application program interfaces (APIs). Cloud computing environments can build software-as-a-service or platform-as-a-service offerings based on these approaches. Software architectures can include agile development methodologies, specifically the collaborative approach between developers, IT professionals, and QA known as DevOps. These approaches emphasize principles such as failing fast and rely on other continuous-improvement development processes such as the Deming cycle (also known as Plan, Do, Check, Act, or PDCA) [22].

The most common general-purpose enterprise data center application architecture is the multi-tier model shown in Fig. 1.7. Based on multi-tier designs that support enterprise resource planning and content resource management solutions, this design includes tiers of servers that host web systems, applications, and databases. Multi-tier server farms provide greater resiliency by running redundant processes on separate servers within the same application tier; this allows a server to be shut down without halting the process it was running. Resiliency is also promoted by load balancing between tiers. Server virtualization allows web and application servers to be deployed as virtual machines on a common physical server (as long as resiliency objectives are met). Traditional database servers tend to use separate physical machines for performance reasons rather than virtualization. Data center tiers can be separated from each other by using different physical routers at each tier or by implementing virtual local area networks (VLANs). Firewalls and VLAN-aware load balancers are also often used between tiers. Physically separate networks can achieve better performance in exchange for higher deployment costs and more devices to manage; the main benefits of VLANs are reduced complexity and cost. System performance requirements and traffic patterns often help determine whether physical or virtual network segmentation is preferred for a particular project.


Figure 1.7. Traditional data center network architecture.

Source: From the Fiber Optic Data Communications Handbook, Chapter 1, Figure 1.1. Courtesy of the InfiniBand Trade Association [online], http://www.infinibandta.org/content/pages.php?pg=technology_overview [accessed 29.01.13].

The multi-tier approach creates pools of servers, storage, and network equipment used for high availability, load balancing, and enhanced security. Resource pooling is a general principle that can be applied to other data processing applications. For example, there is a specific class of high-performance computing (HPC) applications that combine multiple processors into a unified high-performance system using specialized software and high-speed networking. Examples of HPC clusters can be found in scientific and technical research (including meteorology, seismology, and aerodynamics), real-time financial business analysis, high-resolution graphics, and many other areas. HPC systems come in many different types and use both standard hardware and custom processors. There are three main categories of HPC widely recognized by the industry:

Type 1 HPC (parallel message passing, or tightly coupled): applications run concurrently and in parallel on all compute nodes, while a master node determines the workload assignment for each compute node.

Type 2 HPC (distributed I/O processing, e.g., search engines): fast response to client requests is achieved by distributing the requests among master nodes, which then distribute them to many compute nodes for parallel processing (the current unicast transmissions are gradually being replaced by multicast).

Type 3 HPC (parallel or loosely coupled file processing): data files are divided into segments and distributed across a group of servers for parallel processing; the partial results are later recombined.

These clusters can be large or small (up to around 1000 servers) and organized into subgroups with interprocessor communication between the subgroups. An updated list of the world's top 500 supercomputers is maintained [23], providing an overview of the servers, storage, and network connections currently in use. Most cluster networks are based on variations of Ethernet, although proprietary network fabrics are also used. Topologies can include variations of a hypercube, a torus, a hypertree, and full or partial meshes (to provide equal latency and shortest paths to all compute nodes). HPC networks can include four-way or eight-way equal-cost multipathing (ECMP) and hash-based distributed forwarding on Layer 3 addresses and Layer 4 ports.
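The hash-based ECMP forwarding mentioned above picks one of several equal-cost paths by hashing packet header fields, so that every packet of a given flow follows the same path while different flows spread across the links. A simplified sketch assuming a 5-tuple hash (real switches use hardware hash functions, not CRC-32, and the field choice varies by vendor):

```python
import zlib

def ecmp_next_hop(src_ip, dst_ip, src_port, dst_port, proto, num_paths):
    """Toy hash-based ECMP: hash the 5-tuple so all packets of a flow
    take the same path, while distinct flows spread over num_paths links."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    return zlib.crc32(key) % num_paths

# Every packet of one flow maps to the same path, avoiding reordering...
a = ecmp_next_hop("10.0.0.1", "10.0.0.2", 4242, 80, "tcp", 4)
b = ecmp_next_hop("10.0.0.1", "10.0.0.2", 4242, 80, "tcp", 4)
assert a == b
# ...while a different flow may land on a different path.
c = ecmp_next_hop("10.0.0.3", "10.0.0.2", 5151, 80, "tcp", 4)
print(a, c)
```

Keeping a flow on one path is what preserves in-order delivery for TCP while still using all the parallel links of the mesh topologies described above.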


URL:

https://www.sciencedirect.com/science/article/pii/B9780081005125000012

Optical amplifiers for next-generation WDM networks: perspective and overview

Atul Srivastava, John Zyskind, in Optically Amplified WDM Networks, 2011

1.4.12 Chapter 13. Inexpensive Optical Amplifiers (Bruce Nyman and Greg Cowle)

Reducing the cost of optical amplifiers is becoming more and more important: there is constant pressure on the price of optical network equipment; changes in network architectures and applications are extending the use of amplifiers beyond long-haul line amplifiers; and the dominant EDFA technology does not lend itself to cost reduction through integration as readily as other technologies, such as semiconductors.

This chapter examines the issues associated with reducing the cost of optical amplifiers, with a focus on single-stage optical amplifiers, since they are widely used in the highest-volume and most cost-sensitive applications, such as single-channel amplification for high-speed channels with advanced modulation formats, cable television (CATV) distribution amplifiers, and ASE sources for WDM passive optical networks (PONs). Alternative technologies for low-cost amplifiers are discussed, such as semiconductor optical amplifiers and erbium-doped waveguide amplifiers (EDWAs). EDFAs, the dominant technology, are composed of several components with different characteristics based on different technologies. Challenges and opportunities for reducing the cost of the key components of EDFAs, and the cost of the labor to assemble them, are discussed. EDWAs offer cost-saving opportunities by integrating the functions of many of the components required for optical amplifiers. However, the lower pump-to-signal power conversion efficiency of erbium-doped planar waveguides compared with erbium-doped fibers poses an obstacle to commercial realization of the potential cost advantages of EDWAs. A more recent approach is the PLC erbium-doped fiber amplifier, in which many of the required passive devices are integrated onto a planar lightwave circuit (PLC) while the gain is provided by an erbium-doped fiber. This approach combines the cost benefits of PLC integration with the performance and pumping efficiency of erbium-doped fibers and is particularly advantageous for complex amplifier architectures that require many optical components.


URL:


https://www.sciencedirect.com/science/article/pii/B9780123749659100019

Networks

Colin Walls, in Embedded Software (Second Edition), 2012

8.2 Who needs a web server?

It is increasingly common for web servers, or more precisely HTTP servers, to be integrated into routers, gateways, and other network equipment. There are many reasons to consider this for a variety of other types of embedded systems. In a 2000 contribution to the Nucleus Reactor newsletter, Neil Henderson (CEO of Accelerated Technology) gave a very good overview of what is possible and how to get started, and that contribution was the basis of this article. It may be interesting to compare the use of HTTP against SNMP, as their goals in this context are quite similar; see the article "Introduction to SNMP" later in this chapter.

CW

8.2.1 Introduction

You may look at web servers the same way I did before I understood what they could do in an embedded system: in my mind, web servers ran on machines with lots of disk space, serving pages to web browsers. Well, big hard disks are not necessary, and web servers can do a lot more than just serve up web pages.

Of course, with an embedded web server, you can serve pages. But did you know that you can use a web server to provide an interactive user interface for your embedded system? Did you know that you can program this interface once and use it regardless of your user's machine type? Plus, did you know that you can monitor and control your embedded system from any web browser with very little programming? All these things are made possible by this very small, very efficient and very powerful piece of software.

In this article, I will provide information that you will likely be able to use in your embedded system. All you need is a TCP/IP networking stack, an embedded HTTP server (i.e., a web server), and a little imagination. So let's start.

8.2.2 Three Main Skills

Web servers can perform three basic functions:

Deliver webpages to a web browser

Monitor the device they are embedded in

Control the device they are built into.

We will examine these functions in more detail in the remainder of this chapter. Here, I'll give a brief introduction to each of these features so you can better understand the sections that follow.

Deliver webpages to a web browser

This is the most basic capability of a web server. The web server waits on the network for a web browser to connect. Once connected, the web browser gives the web server a file name, and the web server delivers that page to the web browser.

In its simplest form, the web server can serve static HTML files (pages that simply display information) from its file system to the web browser. This capability is ideal for serving the embedded system's user documentation for viewing in a web browser.

A more sophisticated and extremely powerful capability is for the web server to send Java programs or applets (referenced from an HTML file) to the web browser. Once loaded into the web browser, the Java program or applet runs and can communicate with the target (the device containing the web server) using the TCP/IP protocol. The power of this capability lies in the ability to:

Support legacy applications (existing TCP/IP applications can communicate with a Java application running in a browser, rather than requiring proprietary applications to be written for various desktop operating systems).

Write sophisticated TCP/IP-based applications between a client and a server, where you control both ends, regardless of the machine on which the client runs.

Monitor a device

There is often a need to retrieve (i.e., monitor) information about the performance of an embedded system. Monitoring can range from determining the current pixel resolution of a digital camera to receiving vital signs from a medical device.

Embedding certain commands in an HTML page allows dynamic information to be inserted into the HTML stream that is sent to the web browser. As the web server retrieves the file from the file system, it searches the text for specific comments. These comments specify the functions to be performed on the target. These functions format the dynamic information into HTML text and include the text in the HTML stream that is sent to the web browser.
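A minimal sketch of this comment-scanning technique, with an invented tag syntax and handler names (a real embedded server's markers and callback mechanism will differ):

```python
import re

# Hypothetical convention: the server replaces <!--#name--> comments with
# the HTML returned by a registered callback for that name.
HANDLERS = {
    "temperature": lambda: "<b>21.5&deg;C</b>",
    "uptime": lambda: "3 days",
}

def expand(html):
    """Scan the outgoing HTML and splice in dynamic values where a
    registered comment tag is found; unknown tags pass through unchanged."""
    return re.sub(
        r"<!--#(\w+)-->",
        lambda m: HANDLERS[m.group(1)]() if m.group(1) in HANDLERS else m.group(0),
        html,
    )

page = "<p>Sensor: <!--#temperature--> (up <!--#uptime-->)</p>"
print(expand(page))
# <p>Sensor: <b>21.5&deg;C</b> (up 3 days)</p>
```

The key design point is that the static page layout lives in the file system while only the marked spots are computed on each request, which keeps the server small.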

Control a device

HTML has the ability to manage "forms". If you've ever browsed the web and tried to download something, chances are you've seen a form. A form is a collection of "widgets" such as text entry fields, radio buttons, and single-action buttons that can be assembled to collect virtually any type of data.

By creating an HTML page with a group of widgets, information can be collected from the user in a web browser. This information can then be transmitted to the target and used to adjust or change its behavior. For example, an HTML page can be created to configure a robotic arm to move in specific sequences to perform a required function (for example, bending a sheet-metal part). This can be done by placing specific text-entry fields on the HTML page that prompt the user to enter various specific data elements. After the form is sent to the web server, the embedded system application can parse the data points, validate them, and then act on them (or, if the data is invalid, ask the user to enter it again) to move the robotic arm in the correct directions.
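A sketch of how the submitted form data might be parsed and validated on the target, assuming a standard URL-encoded form body; the field names and the angle limit are invented for illustration:

```python
from urllib.parse import parse_qs

def handle_form(body):
    """Parse an application/x-www-form-urlencoded POST body, validate the
    values, and return the fields for the application to act on."""
    fields = {k: v[0] for k, v in parse_qs(body).items()}
    # Always validate before driving hardware: reject out-of-range input
    # so the browser can prompt the user to re-enter it.
    angle = int(fields["angle"])
    if not 0 <= angle <= 180:
        raise ValueError("angle out of range")
    return fields

# Hypothetical submission from the robotic-arm page described above.
print(handle_form("axis=x&angle=45&speed=10"))
```

Validation living on the target, not just in the browser, is what keeps a malformed or malicious request from moving the arm incorrectly.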

8.2.3 Running the Web Server

Once again, we'll look at the web server in terms of the top three features: the processing that takes place on the web server and how information moves to and from the web browser. First, we'll discuss serving pages to a web browser. Then we'll move on to the more complex tasks that can be accomplished with an embedded web server: using it to provide dynamic information to a web browser, and using it to control your embedded system.

Communication between the web server and the web browser is handled via HTTP (Hypertext Transfer Protocol). HTTP provides the rules for coordinating page requests from the web browser to the web server and the responses back. The pages themselves are transmitted in HTML (HyperText Markup Language) format.

provide pages

As discussed above, the simplest use of the web server is to serve HTML pages from the web server to the web browser. This is a simple operation where the server maintains a directory structure that contains multiple files. The user specifies in the web browser the URL (Uniform Resource Locator) that contains the IP address of the web server and the name of the file to be retrieved. The web browser sends an HTTP packet with the requested file name to the web server. The web server finds the file and sends it to the browser using the HTTP protocol. Finally, the web browser displays the page for the user.
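For concreteness, the request the browser sends is just a few lines of text. A minimal sketch of building such a request in C follows; the function name is invented and the HTTP/1.0 framing is deliberately bare (real browsers send many more headers).

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Build the text of a simple HTTP GET request for one file.
 * Returns the number of characters written (excluding the NUL). */
int build_get_request(char *buf, size_t len, const char *host, const char *file)
{
    return snprintf(buf, len,
                    "GET /%s HTTP/1.0\r\nHost: %s\r\n\r\n", file, host);
}
```

The web server parses the request line, looks up `/manual.html` (say) in its file system, and streams the file back preceded by an HTTP status line and headers.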

This feature can be used to provide information such as the device's user manual from the embedded system to the user in a web browser. With most web server implementations, the ability to serve pages in a web browser can be built into an embedded system with little or no programming.

Use of the web server to provide dynamic information to a web browser

By manipulating the HTML page sent to the web browser, the embedded system used by the web server can provide dynamic information to the user. The web server in the embedded device checks each HTML file sent to the web browser. If a specific string of characters is found during the verification process, the web server knows it needs to call a function within the embedded system. The called function knows how to format the dynamic information in HTML and add it to the buffer that is sent to the web browser.

For example, suppose our embedded system is a router. Suppose further that we want to display the IP address of the router. The complete HTML file to display this information might look like this:

<BODY>The IP address of the router is: <!-# IPADDR> </BODY>

When the web server scans this HTML code, it finds the <!-# token, searches for the string IPADDR, and determines that a function, display_IP_addr(Token *env, Request *req), is to be called.

display_IP_addr() might look something like this:

/* A temporary buffer for the formatted IP string. */
char ubuf[600];

void display_IP_addr(Token *env, Request *req)
{
    unsigned char *p;

    /* Get the IP address. */
    p = req->ip;

    /* Convert the IP address to a string and put it in ubuf. */
    sprintf(ubuf, "%d.%d.%d.%d", p[0], p[1], p[2], p[3]);

    /* Send the IP string on its way to the browser. */
    ps_net_write(req, ubuf, strlen(ubuf), PLUGINDATA);
}

Let's briefly review what we just did. In the HTML file, we indicate that we want to display the string "The IP address of the router is:". In addition, there is a command to display the value of IPADDR. It's not obvious from what we see here, but the IPADDR reference actually points to a table on the target. In that table, IPADDR has a corresponding element called display_IP_addr, which is a pointer to the function of the same name.

In the code, we assume that the web server has already found the <!-# string and located the IPADDR entry in the table. That is how display_IP_addr() came to be called.

display_IP_addr() simply gets the IP address from the request structure, formats it into the easily recognizable four-part dotted-decimal form, and then sends the resulting string on its way to the web browser.

Using this simple example, we can begin to see the power the web server has to deliver dynamic information to a web browser. Using more sophisticated HTML, elaborate user displays can be created that are both attractive and informative.

Using the web server to control an embedded system

For years, developers of network-enabled products (e.g., printers, routers, bridges) have had to develop various programs to remotely configure these devices. Since the products may be used on Windows, Mac OS, and Linux, the developers of these systems are forced to write applications for all three platforms. Using a web server can reduce this programming effort to developing one or more HTML pages and writing the code behind them. With this paradigm, users of printers, routers, bridges, and so on simply connect to the device using a web browser. I recently purchased a SOHO router that had this feature. An IP address was specified in the documentation provided with the router. I used that IP address in my web browser to communicate with the router's web server. It presented a full screen of options for configuring the router for my specific circumstances. Let's look at a simple example of how something similar can be done with an HTML file and a bit of code.

The HTML file looks like this:

<BODY> Use DHCP to get IP address? </BODY>

<br>

<br>

<INPUT TYPE="RADIO" NAME="RADIOB" VALUE="YES" CHECKED>YES

<br>

<INPUT TYPE="RADIO" NAME="RADIOB" VALUE="NO">NO

<br>

<br>

<INPUT TYPE="SUBMIT" VALUE="SUBMIT">

The code used to process this request might look like this:

int use_DHCP_flag;

int use_DHCP(Token *env, Request *req)
{
    /* Check that we received the expected "command". */
    if (strcmp(req->pg_args->arg.name, "RADIOB") == 0)
        /* Should we use DHCP? */
        if (strncmp(req->pg_args->arg.value, "YES", 3) == 0)
            /* Yes, use DHCP. */
            use_DHCP_flag = TRUE;
    return 0;
}

Once again, let's review the elements we've just illustrated. First, the HTML. The file has three sections, separated by two line breaks. The first section is simply a prompt to the user. The second section is the code needed to display the radio buttons. The third section serves two functions: it defines the submit button, and it causes the browser to send this screen's information to the web server as soon as the button is clicked. For our discussion, the format of the data in the packet sent from the web browser to the web server is not important. However, as you can see in the function use_DHCP() above, the information is simply fed to a function that can carry out the user's request; in this case, have the router use DHCP to obtain its IP address.
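The browser encodes the form data as name=value pairs (here, RADIOB=YES). A small sketch of pulling a value out of such a string follows; the function name and argument layout are invented for illustration (and a real server would also URL-decode the values), so this is not the actual package API.

```c
#include <assert.h>
#include <string.h>

/* Find "name" in a form submission such as "A=1&RADIOB=YES" and copy its
 * value into out. Returns 1 if found, 0 otherwise. */
int form_value(const char *query, const char *name, char *out, size_t outlen)
{
    size_t n = strlen(name);
    const char *p = query;
    while (p) {
        if (strncmp(p, name, n) == 0 && p[n] == '=') {
            const char *v = p + n + 1;
            const char *amp = strchr(v, '&');          /* end of this pair */
            size_t vlen = amp ? (size_t)(amp - v) : strlen(v);
            if (vlen >= outlen) vlen = outlen - 1;     /* clamp to buffer */
            memcpy(out, v, vlen);
            out[vlen] = '\0';
            return 1;
        }
        p = strchr(p, '&');                            /* try the next pair */
        if (p) p++;
    }
    return 0;
}
```

A handler such as use_DHCP() above is essentially this lookup followed by a comparison of the extracted value.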

8.2.4 Brief summary of web server capabilities

We examined three different web server capabilities: serving HTML pages to a web browser, sending HTML files containing dynamic information to a web browser, and using a web browser to command or control an embedded system. The examples and explanations of these functions are straightforward, but their uses are unlimited!

I've given presentations to hundreds of people about the benefits of an embedded web server. In these presentations, I always emphasize the importance of imagination when using this software. For about 20 K of code and a little effort, you can build systems with sophisticated user interfaces that allow your users to understand, use, and control your embedded system.

What has been discussed so far in this document are the basic features of the web server. In the next section, we'll look at some additional features that a specific Web server implementation may or may not have.

8.2.5 What else should you consider?

As you continue to learn about and use embedded web servers, you will find that commercial package vendors vary in their offerings. Some things to keep in mind are:

Authentication (security)

Utilities for embedding HTML files

File compression

File upload capabilities

HTTP 1.0 provides basic authentication. If you've ever tried to access a web page and received a dialog asking you to enter your user ID and password, you've seen this feature in use. You should verify that the package provides the ability to add and remove users from the username/password database on the web server. In some cases, you must add code to do this.
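A sketch of the kind of hook you might have to write yourself: a tiny in-memory username/password table with add and check operations. All names here are illustrative, and a real package would store password digests rather than compare plaintext.

```c
#include <assert.h>
#include <string.h>

/* Hypothetical credential store for HTTP basic authentication. */
struct user { const char *name; const char *pass; };
static struct user users[8] = { { "admin", "secret" } };
static int nusers = 1;

/* Add a user; returns 0 if the table is full. */
int add_user(const char *name, const char *pass)
{
    if (nusers >= 8)
        return 0;
    users[nusers].name = name;
    users[nusers].pass = pass;
    nusers++;
    return 1;
}

/* Check a name/password pair against the table. */
int check_credentials(const char *name, const char *pass)
{
    for (int i = 0; i < nusers; i++)
        if (strcmp(users[i].name, name) == 0 &&
            strcmp(users[i].pass, pass) == 0)
            return 1;
    return 0;
}
```

The web server would call check_credentials() with the decoded contents of the browser's Authorization header before serving a protected page.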

In general, most embedded web servers use a very simple file system that resides in memory. Vendors that support this should also provide support for creating this file system on your desktop so that it can be placed in ROM or Flash on your target. In addition, the vendor should provide the ability to use a more capable file system that can handle the myriad offline storage devices available to embedded systems.

If a vendor supports creating an in-memory file system (files contained in ROM or Flash), as mentioned, it should also provide a file compression capability. HTML files can grow and take up a lot of space in an in-memory file system. The compression function must be able to compress the files when the in-memory file system is created and decompress a file when it is requested by the web server.
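To sketch the compress-at-build-time, decompress-on-request flow (only the flow — a real package would use a standard algorithm such as DEFLATE, and these function names are made up), here is a trivial run-length code:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Encode runs of a character as digit+char, e.g. "aaab" -> "3a1b".
 * Runs are capped at 9 so the count stays a single digit. */
size_t rle_compress(const char *in, char *out)
{
    size_t o = 0;
    for (size_t i = 0; in[i]; ) {
        size_t run = 1;
        while (in[i + run] == in[i] && run < 9)
            run++;
        o += (size_t)sprintf(out + o, "%zu%c", run, in[i]);
        i += run;
    }
    out[o] = '\0';
    return o;
}

/* Expand digit+char pairs back into the original text. */
size_t rle_decompress(const char *in, char *out)
{
    size_t o = 0;
    for (size_t i = 0; in[i]; i += 2)
        for (int n = in[i] - '0'; n > 0; n--)
            out[o++] = in[i + 1];
    out[o] = '\0';
    return o;
}
```

The build tool would run the compressor over each HTML file before burning the image; the web server would run the matching decompressor when the browser requests the file.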

HTML 3.2 allows files to be uploaded from the web browser's host machine to the web server. A vendor that provides a reasonable implementation of a web server should also support this feature.

8.2.6 Conclusion

Web servers will continue to proliferate in embedded systems. The possibilities offered by this technology are as vast as the imagination of embedded developers like you. This is a technology that can be used to create sophisticated user interfaces for embedded systems, maintain a local repository for user documentation, allow users of the embedded system to control it, and much more.

As the Internet becomes ubiquitous, the number of embedded devices connected to it will increase. We currently know of no better way to monitor and control an embedded system than a web server. I hope you are as excited about this technology as I am. Above all, I hope you can get the most out of it on your system.

This document was written primarily to introduce this technology and give you an idea of how beneficial it can be. We hope it has encouraged you to get your creative juices flowing and find a way to use this amazing technology.

Read the whole chapter: https://www.sciencedirect.com/science/article/pii/B9780124158221000088

Cloud-based approach to data centers

In Cloud Control Systems, 2020

14.2.1 Server Level Control

At the server level, many control variables are available for IT, power, and cooling management. "Server" in this case means all IT equipment, including compute servers, storage drives, and network equipment. Compute resources such as CPU cycles, memory capacity, memory access bandwidth, and network bandwidth are all local resources that can be dynamically adjusted, especially in a virtualized environment. Power control can be done on either the demand side or the supply side, even at the server level. Server power consumption can be controlled through active management of the workload hosted on the server, such as admission control, load balancing, and workload migration or consolidation. On the other hand, power consumption can be adjusted through physical control variables such as dynamic voltage and frequency scaling (DVFS) and through on/off state control [515–523]. DVFS is already implemented in many operating systems, for example the "CPU governors" in Linux systems. CPU usage is normally monitored by the DVFS controller, which adjusts power consumption to match the fluctuating workload.
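As a toy illustration of the governor idea — stepping the operating frequency up under high utilization and down under low — the policy below makes the feedback loop concrete. The frequency table and thresholds are invented for illustration; they are not Linux's actual governor parameters.

```c
#include <assert.h>

/* Illustrative frequency table (kHz), lowest to highest. */
static const int freq_khz[] = { 800000, 1400000, 2000000 };
#define NFREQ ((int)(sizeof freq_khz / sizeof freq_khz[0]))

/* Given the current frequency index and the recent CPU utilization
 * (percent), return the frequency index to use for the next interval. */
int next_freq_index(int cur, int utilization_pct)
{
    if (utilization_pct > 80 && cur < NFREQ - 1)
        return cur + 1;   /* busy: raise voltage/frequency */
    if (utilization_pct < 20 && cur > 0)
        return cur - 1;   /* mostly idle: step down to save power */
    return cur;           /* within band: hold */
}
```

A real governor runs this decision periodically and writes the chosen frequency to the hardware; because dynamic power scales with voltage squared times frequency, stepping down during idle periods yields substantial savings.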

Previous work has focused on how to handle the tradeoff between power consumption and IT performance. Varma et al. [524] discuss a control-theoretic approach to DVFS. Cho et al. [515] discuss a control algorithm that varies the clock speed of a microprocessor and the clock speed of memory. Leverich et al. [525] propose a control approach to reduce the static power consumption of chips in a server through dynamic per-core power control.

Server-level cooling control is usually implemented by actively tuning the server fans [526]. Similar to power management, the thermal state of servers (for example, processor temperatures) can also be affected by active load or performance control. As an example, Mutapcic et al. [520] focus on maximizing the processing capability of a multicore processor subject to thermal constraints. In another example, Cohen et al. [519] propose strategies to control the power consumption of a processor via DVFS so as to impose certain constraints on chip temperature and workload execution.

Read the whole chapter: https://www.sciencedirect.com/science/article/pii/B9780128187012000226

Software Defined Networking and OpenFlow

Saurav Das, ... Rob Sherwood, in Fiber Optic Data Communications Handbook (Fourth Edition), 2013

17.5 Application example: WAN TE

All Tier 1 and many Tier 2 Internet Service Providers (ISPs) use some form of TE in their WAN infrastructures today. The main goals of TE are greater determinism and better utilization of network resources, and MPLS-TE networks are the preferred solution, mainly because plain IP networks cannot provide the same service and the older methods of providing TE with ATM or Frame Relay networks are no longer used.


But MPLS-TE networks are expensive and complex, and they do not provide network operators with the level of determinism, optimization, and flexibility they require [14]. Note that in MPLS-TE, the reserved bandwidth of a tunnel is generally the network operator's estimate of the tunnel's potential usage. But traffic matrices change over time in unpredictable ways. Therefore, a given tunnel's reservation can differ greatly from its actual usage at a given time, resulting in an unoptimized network.

Router manufacturers try to get around this problem with mechanisms such as auto-bandwidth, but at best this is local optimization. Each router knows only the tunnels it originates and, to some extent, the tunnels that pass through it. For all other tunnels, the router knows only the aggregate bandwidth those tunnels have reserved on the links. In other words, although the router builds a map that gives the global TE link state, it has only a local view of tunnel state (TE-LSP state). As a result, local optimizations performed by multiple decision makers (tunnel head-ends) cause significant churn in the network.

Another option is to use a PCE. A PCE can optimize all tunnels globally, as it has a complete view of tunnel and link state. But the PCE is an offline tool. The results of PCE computations are difficult to apply to live networks: head-end routers must be reconfigured individually, and CLI scripts must be executed with care to avoid misconfigurations. This process is cumbersome enough that it is attempted infrequently (e.g., once a month).

With SDN and OpenFlow, we can get the best of both approaches by bringing the PCE tool "inline". We take advantage of the global optimization the PCE tool provides, and then have the results of those optimizations directly and dynamically update the forwarding state (as routers can do). The net effect is that, with frequent (perhaps daily or hourly) re-optimization, the network operator can run the network at a much higher load, without the churn and the operational problems of CLI scripts, thanks to the programmatic interface of the SDN platform and the inline TE/PCE application.

17.5.1 Google's OpenFlow-Based WAN

At the time of this writing, perhaps the best-known implementation of OpenFlow and SDN in a production network is Google's centralized TE implementation in its data center WAN.

In terms of traffic volume, Google's networks are the size of those of the largest network operators in the world [14]. Google's WAN infrastructure is organized into two main networks. One is the I-Scale network, which connects to the Internet and carries user traffic (such as Search and Gmail) to and from its data centers. The other is the G-Scale network, which carries traffic between its global data centers. The G-Scale network runs 100% on OpenFlow.

Google built its own switches for the G-Scale network. They are designed to provide the minimum support needed for this solution, including support for OpenFlow and the ability to carry terabits of bandwidth between sites. By deploying a group of these switches and a group of OpenFlow controllers at each site, Google created a WAN "fabric" on which it implemented a centralized TE service. The TE service (or application) collects real-time resource and network state information and interacts with applications at the edge that request network resources. Knowing both the demand and a global view of the supply, it can optimally compute routes for incoming traffic flows and have the results of these computations programmed into the WAN "fabric" via OpenFlow.

Based on its experience with production deployments, Google cites several benefits of using SDN in the WAN [9,14]. As mentioned in the introduction, almost all of them can be grouped under the three main benefits that SDN brings to any network:

Easier control with greater flexibility:

Dynamic TE with a global perspective allowed Google to run its network "hot" at a highly efficient (and previously unheard-of) 95% utilization. Typical carrier WANs run at around 30% utilization.

Faster convergence to optimal target after network failures. This is directly due to the greater determinism and tighter control that OpenFlow/SDN offers compared to traditional distributed protocols.

SDN allowed them to move all control logic to high-performance external servers with more powerful CPUs, rather than relying on the less powerful CPUs embedded in network equipment. These servers can also be upgraded more easily, independently of the data-plane forwarding equipment.

Lower total cost of ownership (TCO):

Traditionally, the CapEx cost per bit should decrease as networks scale, but in reality this has not been the case. With SDN, Google was able to separate the control software from the hardware and optimize each separately: it could choose hardware based on the features it needed (and no more) and build software based on the service (TE) requirements (rather than the requirements of the distributed protocols in traditional network control planes), which reduced capital costs.

By separating management, monitoring, and operations from the network elements, operational costs can be reduced by managing the WAN as a single system rather than a collection of individual boxes.

Speed of innovation and shorter time to market:

Once their backbone was ready for OpenFlow, it took just 2 months to deploy a production-quality centralized TE solution.

Google can update the software and add new features "in service", i.e., without packet loss or capacity penalties, since in most cases the features don't "touch" the switch; they are handled entirely in the decoupled control plane.

Read the whole chapter: https://www.sciencedirect.com/science/article/pii/B9780124016736000179

Energy-efficient telecommunications

Daniel C. Kilper, Rodney S. Tucker, in Fiber Optic Telecommunications (Sixth Edition), 2013

17.6 Conclusion

All six parts of this book series have highlighted the importance of fiber optic technologies in advanced telecommunications systems. Combined with advances in electronic switching and signal processing, as well as advanced protocols and network management systems, today's telecommunications network has achieved high levels of capacity, reach, reliability, flexibility, and affordability for users. For many years, capacity and cost were the main driving forces behind advances in telecommunications. But recently, concern over the rising power consumption of telecom networks has brought energy efficiency onto the agendas of equipment vendors and network operators.

We identified several reasons for this recent increase in interest in energy efficiency. First, as the capacity of network equipment such as optical transceivers, switches, and routers increases, the density of active devices must increase to keep equipment size acceptably small. This increasing density has created challenges in dissipating heat from equipment racks. Improved thermal engineering can alleviate some of these issues, but ultimately the energy efficiency of the active devices must improve. Second, the operational cost (OpEx) associated with equipment power consumption is becoming an increasingly important part of overall OpEx for network operators. In the past, energy costs were such a small part of a carrier's total cost of ownership that many operators paid little or no attention to energy; that is now changing rapidly. Third, network energy consumption has a small but growing impact on global greenhouse gas emissions as the telecom network continues to expand to meet growing demand for new services and applications and to accommodate a growing user base.

We also provided a detailed analysis of the energy consumption of the major elements of a telecommunications system, including switching and transport. Fundamental power ratios were used to describe a lower bound on minimum network power consumption, based on technologies that are practical today and expected over the next decade. This result was discussed in relation to network energy consumption estimates based on technology projections for commercial systems. The four orders of magnitude that separate these trends depend not only on the efficiency of the technology but also on the many functional and performance requirements of today's commercial systems. Advances in energy-efficient telecommunications therefore require combining technological improvements with new, intelligent, service-aware features that can deliver significant performance or functionality at lower power consumption.

Read the whole chapter: https://www.sciencedirect.com/science/article/pii/B9780123969606000171

Role of blockchain technology in IoT applications

Aafaf Ouaddah, in Advances in Computers, 2019

2.3 Summary

Centralized and decentralized approaches with trusted entities, where all devices are identified, authenticated, and connected through cloud-based servers, are backed by enormous processing and storage capacity. While this model has connected standard computing devices for decades and will continue to serve small IoT networks [32], it is struggling to keep up with the growing demands of the vast IoT ecosystems of the future [33–35], for the following reasons.

Cost: Existing IoT solutions are expensive for two main reasons. (1) High maintenance costs: centralized clouds, large server farms, and network equipment are costly to maintain, especially when software updates must be distributed to millions of devices for years after they have been discontinued [36]. (2) High infrastructure costs: with tens of billions of IoT devices, the infrastructure must handle a very high volume of messages (communication costs), of data generated by the devices (storage costs), and of analysis procedures (server costs).

Bottleneck and single point of failure: Servers and cloud farms remain a bottleneck and single point of failure that can bring the entire network down. This is especially important for critical IoT services, such as healthcare, that connect to them directly.

Scalability: In the centralized paradigm, cloud-based IoT application platforms collect information from entities located in data collection networks and provide raw data and services to other entities. These application platforms control the entire flow of information. This creates a bottleneck when scaling IoT solutions to the exponentially growing number of devices and the amount of data generated and processed by those devices (i.e., "big data").

Inadequate security: The massive amount of data collected from millions of devices raises information security and privacy concerns for individuals, businesses, and governments. As evidenced by recent denial-of-service attacks involving IoT devices [37], the large number of cheap, insecure devices connected to the Internet is proving a significant challenge to IoT security.

Data breaches and lack of transparency: In centralized models, consumers cannot necessarily trust service providers, who gain access to data collected from billions of information creators. There is a need for a "security through transparency" approach that allows users to remain anonymous in this hyper-connected world.

Read the whole chapter: https://www.sciencedirect.com/science/article/pii/S0065245818300676

