As with any large-scale computing system deployment, the short answer to the question “what should my fog compute deployment look like” is “it varies.” But since that’s not a particularly useful piece of information, Cisco principal engineer and systems architect Chuck Byers gave an overview on Wednesday at the 2018 Fog World Congress of the many variables, both technical and organizational, that go into the design, care and feeding of a fog computing setup.
Byers offered both general tips about the architecture of fog computing systems and slightly deeper dives into the specific areas that all fog computing deployments will have to address, including different types of hardware, networking protocols, and security.
Compute options in fog settings
Computation in fog settings is typically heterogeneous, mixing multiple processor types in a single deployment. RISC and CISC CPUs, such as those made by ARM and Intel, offer strong single-thread performance and a high degree of programmability. “They’re always going to have an important place in fog networks, and almost every fog node will have at least a couple of cores of that class of CPU,” Byers said.
They’re far from the only options, however. Field-programmable gate arrays can be helpful in use cases where custom datapaths are used to accelerate workloads, and GPUs – as seen most commonly in gaming systems, but also in increasing profusion in the high-performance computing world – are great at handling tasks that need a lot of parallel processing.
“Where a good RISC or CISC CPU may have a dozen cores, a big GPU may have a thousand cores,” he said. “And if your system and algorithms are amenable to parallel processing, GPUs are a very inexpensive and very power-efficient way to get lots and lots of bang for the buck.”
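A rough sketch of what “amenable to parallel processing” means in practice (the example and function names are illustrative, not from Byers’s talk): a data-parallel workload, where each output element depends only on its own input, maps naturally onto a many-core GPU, while a computation with a serial dependency between steps gains little from extra cores.

```python
def scale_all(values, factor):
    # Data-parallel: every element is computed independently of the
    # others, so thousands of GPU cores could each take one element.
    return [v * factor for v in values]

def running_total(values):
    # Serial dependency: each step needs the previous result, so
    # adding cores offers little speedup for this loop as written.
    totals, acc = [], 0
    for v in values:
        acc += v
        totals.append(acc)
    return totals

print(scale_all([1, 2, 3], 10))   # [10, 20, 30]
print(running_total([1, 2, 3]))   # [1, 3, 6]
```

The first pattern is the kind of workload Byers describes as a good fit for a thousand-core GPU; the second is better served by a CPU with strong single-thread performance.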
Finally, tensor processing units, optimized to make machine learning and AI-based tasks easier, are an obvious fit for workloads that rely on that type of functionality.
Storage in fog computing
There’s a hierarchy of storage options for fog computing that runs from cheap but slow to fast but expensive. At the cheap-but-slow end sits network-attached storage. A NAS offers huge storage volumes, particularly over a distributed network, but that can mean latency measured in seconds or minutes. Rotating disks could work well for big media libraries or data archives, according to Byers, while providing