In this paper we present Stateless Network Functions, a new architecture for network functions virtualization, where we decouple the existing design of network functions into a stateless processing component along with a data store layer. In breaking this tight coupling, we enable a more elastic and resilient network function infrastructure. Our StatelessNF processing instances are architected around efficient pipelines that utilize DPDK for high-performance network I/O, are packaged as Docker containers for easy deployment, and include a data store interface optimized for the expected request patterns to efficiently access a RAMCloud-based data store.

Traditional network intrusion detection systems (NIDS) are signature based; anomaly patterns are difficult to define, and rulesets are often not updated frequently enough to reflect changing attack behaviors. We present ThunderSecure, a high-throughput, unsupervised-learning-based intrusion detection system for 100G research networks. ThunderSecure implements an efficient packet processing and detection pipeline using multiple CPU cores and GPUs. It extracts statistical and temporal features from real-time network data streams and feeds them to a one-class anomaly detection network. A baseline profile of normal traffic is built from the training observations, and testing traffic that deviates from the learned profile is marked as anomalous. We trained ThunderSecure on hundreds of billions of science data packets mirrored from two 100G network connections at Fermi National Accelerator Laboratory, and evaluated its detection performance on traffic captured from the same research network days and weeks after training, with different types of attack flows injected. Results show that ThunderSecure recognizes science data traffic captured long after training and achieves near-certain detection on the segments of the streams where anomalous flows were injected.
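The baseline-and-deviation idea behind such one-class detection can be sketched with a toy detector. This is an illustrative simplification only: the feature layout, the z-score rule, and the threshold value are assumptions for the sketch, not ThunderSecure's actual GPU model.

```python
import math

class OneClassBaseline:
    """Toy one-class detector: learns a per-feature mean/std profile from
    benign training traffic, then flags test samples whose z-score exceeds
    a threshold. (Hypothetical simplification of the one-class anomaly
    detection network described in the abstract.)"""

    def __init__(self, threshold=3.0):
        self.threshold = threshold
        self.mean = None
        self.std = None

    def fit(self, samples):
        # samples: list of feature vectors (e.g. mean packet size,
        # inter-arrival time) extracted from benign flows.
        n = len(samples)
        dims = len(samples[0])
        self.mean = [sum(s[d] for s in samples) / n for d in range(dims)]
        self.std = [
            math.sqrt(sum((s[d] - self.mean[d]) ** 2 for s in samples) / n)
            or 1.0  # guard against zero variance
            for d in range(dims)
        ]

    def is_anomaly(self, sample):
        # Flag if any feature deviates more than `threshold` std devs
        # from the learned baseline.
        return any(
            abs(x - m) / sd > self.threshold
            for x, m, sd in zip(sample, self.mean, self.std)
        )
```

A real deployment would replace the z-score rule with a learned one-class model and run feature extraction in parallel across cores, but the train-on-benign, flag-deviations workflow is the same.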
Nowadays, data generated by large-scale scientific experiments are on the scale of petabytes per month. These data are transferred through dedicated high-bandwidth networks (40/100G) across distributed sites for processing, storage, and analysis. Like general-purpose networks, research networks experience intrusions; however, monitoring anomalies in such high-speed network traffic is challenging with current cyber-infrastructure.

The Science DMZ (SDMZ) is a special-purpose network architecture proposed by ESnet (Energy Sciences Network) to facilitate distributed science experimentation on terabyte- (or petabyte-) scale data, exchanged over ultra-high-bandwidth WAN links. Critical security challenges faced by these networks include: (i) network monitoring at high bandwidths, (ii) reconciling site-specific policies with project-level policies for conflict-free policy enforcement, (iii) dealing with geographically distributed datasets with varying levels of sensitivity, and (iv) dynamically enforcing appropriate security rules. To address these challenges, we develop a fine-grained, dataflow-based security enforcement system, called CoordiNetZ (CNZ), that provides coordinated situational awareness, i.e., context-aware tagging for policy enforcement using dynamic contextual information derived from hosts and network elements. We also developed tag- and IP-based security microservices that incur minimal overhead in enforcing security on data flows exchanged across geographically distributed SDMZ sites. We evaluate our prototype implementation across two geographically distributed SDMZ sites with SDN-based case studies, and present performance measurements that highlight the utility of our framework and demonstrate efficient enforcement of security policies across distributed SDMZ networks.
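A minimal sketch of tag-based policy enforcement, loosely following the idea of tagging flows with contextual information at the source and checking site policies at the network edge. The tag fields, policy schema, and site names here are illustrative assumptions, not CNZ's actual design.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FlowTag:
    """Contextual tag attached to a data flow at the source host.
    (Hypothetical fields chosen for illustration.)"""
    project: str      # project-level context, e.g. experiment name
    sensitivity: str  # e.g. "public" or "restricted"

# Site-specific policy: which sensitivity levels each destination
# site accepts. (Illustrative; a real system would reconcile these
# with project-level policies.)
SITE_POLICY = {
    "site-a": {"public", "restricted"},
    "site-b": {"public"},
}

def enforce(tag: FlowTag, dst_site: str) -> bool:
    """Permit a flow only if the destination site's policy accepts
    the sensitivity level carried in the flow's tag; unknown sites
    default to deny."""
    allowed = SITE_POLICY.get(dst_site, set())
    return tag.sensitivity in allowed
```

Default-deny for unrecognized sites keeps the policy check conservative: a flow crossing to a site with no declared policy is dropped rather than forwarded.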