2024/12/17
Blogs
NEXCOM

FWA Over 5G Explained: The Role of 5G uCPE

The Trend

5G technology has been launched at an astounding pace and continues to accelerate in its development, enhancing the functionality and performance of FWA (Fixed Wireless Access) applications. Initially, FWA was a means to replace economically unviable wired networks for last-mile connectivity in rural and remote areas. Empowered by 5G and benefiting from increased bandwidth, complete connectivity, and rapid, flexible deployment, FWA has branched out further into various vertical markets. This recent advancement in 5G FWA technology has set up an arena for players from all over the world to compete for substantial business opportunities.

According to a June 2023 report by Ericsson, it is projected that by 2028 there will be over two hundred million 5G FWA users, constituting 17% of fixed network connections. The report also notes that there are already over 100 telecommunications companies worldwide offering 5G FWA application services. In the context of global efforts to bridge the digital divide, 5G FWA has become a crucial component in achieving nationwide broadband connectivity.

Currently, the primary application of 5G FWA is in public network scenarios where wireless transmission is used to reach the last mile. However, with the completion of the 3GPP Release 17 standardization, 5G applications are becoming more comprehensive. In addition to the fundamental functions of 5G, such as eMBB (Enhanced Mobile Broadband) in both the FR1 and FR2 frequency ranges, URLLC (Ultra-Reliable Low Latency Communication), and mMTC (massive Machine Type Communication), advanced features like 5G network slicing, 5G TSN (Time-Sensitive Networking), 5G security, and NTN (non-terrestrial networks) enable 5G FWA technology to be used as a 5G private network in various settings, including smart factories, smart manufacturing, smart cities, and intelligent transportation (5G-V2X).

The Challenge

The widespread adoption of 5G FWA across various sectors and situations underscores the importance of understanding the unique requirements of each application in order to identify the most suitable equipment.

For service providers currently evaluating different options, it is advisable to take into account the following factors: the reliability of equipment for managing traffic, meeting critical low-latency demands, the necessity for mobility and outdoor wide-area connectivity, and a comprehensive, future-proof solution that caters to both present and future requirements.

For different field applications, 5G FWA can essentially be categorized into four attribute grades: Consumer Grade, Enterprise Grade, Industrial Grade, and Telecom Grade. Each grade focuses on different features and functions, allowing various usage scenarios to better showcase the advantages of 5G FWA. TABLE I below illustrates the characteristics of the different grades of 5G FWA.
TABLE I. 5G FWA GRADES AND THEIR ATTRIBUTES
(Requirements: Low ★ / Middle ★★ / High ★★★)

Grade | Bandwidth | Performance | Computing (AI) | Latency | Reliability | Slicing | Security | PoE | LAN | IP Code
Consumer | ★ | ★★ | ★ | ★★★ | ★ | ★ | ★ | ★ | ★ | -
Enterprise | ★★ | ★★★ | ★★★★★ | ★★★ | ★★★★★ | ★★ | ★★★★★ | ★★★★★ | ★★ | -
Industrial | ★★★★★★ | ★★★ | ★★★★★ | ★ | ★★★ | ★★★ | ★★★ | ★★★★★ | ★★★ | IP5x/IP6x
Telecom | ★★★★★ | ★★★ | ★★★★★ | ★★★ | ★★★ | ★★★ | ★★★ | ★★ | ★★★ | IP6x

Consumer Grade
Deployment location: homes, suburban areas, islands
Deployment type: indoor
Purpose: 5G wireless transmission to replace wired transmission
Benefits: increased bandwidth, fast deployment, reduced cost of laying wires
Network environment: private and public
Applications: MHN (mobile hotspot network), AP (access point)

Enterprise Grade
Deployment location: office, bank, shopping mall, campus
Deployment type: indoor
Purpose: optimized user experiences and services
Benefits: increased bandwidth, high performance, low latency, and stability
Network environment: private and public
Applications: WIPS, SASE, MHN

Industrial Grade
Deployment location: factory, smart cities, healthcare, sports event video streaming
Deployment type: indoor, semi-outdoor, and outdoor
Purpose: optimized network bandwidth and performance, ultra-low latency, Quality of Service (QoS)
Benefits: stability and increased security
Network environment: private
Applications: network slicing, PoE control, firewall, IoT gateway

Telecom Grade
Deployment location: utility poles, smart traffic lights and control
Deployment type: indoor, semi-outdoor, and outdoor
Purpose: consistent and stable network performance
Benefits: stability and increased security
Network environment: indoor, semi-outdoor and outdoor
Applications: 5G network slicing, network-in-a-box, 5G-V2X

Solution

Realizing that the abundance of alternatives on the market leaves customers confused, NEXCOM provides clarity by tailoring its products to the diverse application grades and settings of 5G FWA, suitable for deployment in both private and public networks. NEXCOM's range of 5G FWA appliances includes a selection of desktop units and 1U servers, categorized according to CPU performance and offering various wireless and wired connectivity options.

NEXCOM's desktop uCPEs are designed with both RISC and x86 architectures and are available either as a complete solution package with a network OS or as white-box options for companies with their own software research and development resources.

The entry-level appliance in the desktop 5G FWA lineup is the Arm-based uCPE DTA 1376. This device is equipped with an NXP® Layerscape® 4-core processor that incorporates DPAA (data path acceleration architecture) to deliver a comprehensive set of networking accelerations, effectively integrating all facets of packet processing. DTA 1376 features seven 1GbE copper ports for Ethernet connectivity and offers optional support for 5G FR1 and Wi-Fi connectivity.

The mainstream appliance in the desktop 5G FWA lineup is the Intel-based uCPE DTA 1164W Series. Powered by an Intel Atom® C3436L 4-core CPU and featuring a maximum of 16 GB of DDR4 ECC memory and an M.2 SATA 2242 Key M 8GB SSD, it supports six 1GbE RJ45 copper ports, two 1Gb ports, Wi-Fi 6, and PoE, capable of providing up to 30W (802.3at) with a 72W 54V PoE power adaptor.

The Intel-based uCPE DFA 1163 Series stands out as the highest-performing unit in the 5G FWA desktop uCPE lineup. It is equipped with an Intel Atom® C3558R/C3758R processor, boasting 4 or 8 cores respectively.
This professional uCPE integrates a 10GbE SFP+ fiber LAN port for upstream data transmission to back-end Ethernet switches and onward to central servers. It also features copper ports with varying link speeds, including two 2.5GbE RJ45 ports and eight 1GbE Ethernet switch ports, enabling Ethernet services such as VLAN and QoS for IoT devices. In terms of wireless connectivity, the DFA 1163M/Q SKUs stand out in the FWA product line with their support not only for Wi-Fi and 5G FR1 but also for 5G FR2 (mmWave).

The industrial-grade DIN rail appliance for 5G FWA applications, ISA 141, is designed for deployments in relatively harsh environments. Powered by Intel's quad-core Atom® processor, it is a compact, fanless appliance equipped with three 1GbE copper ports and one fiber combo port for network connectivity. The compact DIN rail design allows ISA 141 to be easily embedded in existing network infrastructure, while the out-of-band (OOB) management function enables IT personnel to maintain the device remotely, guaranteeing consistent, high-performance operation. Its exceptional feature set includes dual Wi-Fi and dual 5G for concurrent connectivity and wireless load balancing, ensuring highly adaptable and advanced wireless connectivity.

The performance of each 5G FWA uCPE was tested using the Transmission Control Protocol (TCP). The tests were performed at the NEXCOM office through an Amari Callbox, a 3GPP-compliant eNB/gNB and EPC/5GC. The topology is shown in Figure 1.

Figure 1. 5G FR1 NSA/SA Test Topology

In 5G FR2 NSA mode, the NEXCOM uCPE boxes underwent testing with a 3CC configuration, whereas 5G FR1 SA and NSA utilized the maximum Callbox capacity of 4CC. Here, 3CC and 4CC denote the number of aggregated carriers employed for testing, dictated by the test equipment configuration and network requirements. The outcomes are integral to understanding the uCPEs' performance under realistic and demanding conditions.

The test primarily emphasized download capabilities, allocating an average of 70% of Amari Callbox resources for this purpose. Meanwhile, approximately 20% were reserved for upstream tasks, and the remaining 10% were allocated for other functions. The achieved results for each 5G FWA uCPE were standardized and are presented in Mbps in TABLE II.

TABLE II. 5G FWA PRODUCT PORTFOLIO, UPLINK AND DOWNLINK SPEED TEST RESULTS AND GRADE MAPPING

In the 5G FR1 testing, the four DUTs utilized 5G modules sourced from diverse manufacturers, while the 5G FR2 NSA DUTs leveraged two specific 5G modules: the X55 and the X62. The X55 module provides compatibility with 3GPP Release 15, while the X62 module, an entry-level solution, supports 3GPP Release 16 with an exceptional cost-performance ratio. For a more in-depth understanding of each uCPE box's testing configuration and results, kindly request further information from NEXCOM representatives.

Overall, the tests prove that each of the tested appliances is ready for 5G FWA deployments in both SA (Standalone) and NSA (Non-Standalone) modes, i.e. in public and private network environments.

Conclusion

5G FWA uCPE applications are boundless: from enabling real-time data processing for smart cities to ensuring mission-critical communications in industrial settings, and from revolutionizing healthcare with telemedicine solutions to providing seamless connectivity in remote areas. 5G FWA uCPE delivers reliable, low-latency, high-bandwidth connections across diverse sectors, driving innovation and progress.
NEXCOM provides a diverse 5G FWA uCPE range tailored for various sectors and use cases. Each appliance comes with predefined features and expansion space, allowing customers to select additional options for a customized uCPE that suits their requirements. To keep things simple, NEXCOM's 5G FWA uCPE is also integrated with a lightweight network OS for easy setup and control, enabling customers to concentrate on their applications without worrying about complex networking configurations.
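For readers who want a feel for how TCP throughput figures like those referenced in TABLE II are typically gathered, the short Python sketch below drives iperf3 against a test server and reports downlink and uplink rates in Mbps. It is a minimal, illustrative harness only, assuming iperf3 is installed and reachable at a hypothetical server address; NEXCOM's published results were obtained with the Amari Callbox setup described above.

```python
import json
import subprocess

SERVER = "192.0.2.10"   # hypothetical iperf3 server reachable over the 5G FWA link
DURATION = 10           # seconds per direction

def run_iperf3(reverse: bool) -> float:
    """Run a TCP iperf3 test and return throughput in Mbps.

    reverse=True measures downlink (server -> uCPE), matching the
    download-oriented focus of the tests described above.
    """
    cmd = ["iperf3", "-c", SERVER, "-t", str(DURATION), "-J"]  # -J emits JSON results
    if reverse:
        cmd.append("-R")
    out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
    result = json.loads(out)
    bits_per_second = result["end"]["sum_received"]["bits_per_second"]
    return bits_per_second / 1e6

if __name__ == "__main__":
    downlink = run_iperf3(reverse=True)
    uplink = run_iperf3(reverse=False)
    print(f"Downlink: {downlink:.1f} Mbps, Uplink: {uplink:.1f} Mbps")
```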
2024/12/17
Blogs
NEXCOM

Accelerating Data Transfer Efficiency with Next Generation Cyber Security Appliance

The Trend

The world has gone digital. This statement is no longer news but a way of life. Big data is generated online every second, growing exponentially in volume and speed. According to forecasts, the total volume of information will more than double in just a few years: from 75 zettabytes (ZB) in 2021 to 175ZB in 2025[1].

Personal gadgets (cell phones, laptops, PCs) are hitting record highs in storage and memory capacity, and more cloud services are available on the market than ever. The same growing demand can be seen in the commercial sector, as evidenced by hybrid clouds of different scales being built by enterprises and institutional organizations large and small, either on their own or commissioned from service providers.

Data continuously evolves with technology. Pure analog gave way to digital signals decades ago. Transporting today's sheer volume of data is in itself a formidable task, and critical data must be shielded with an additional layer of security during transport. Cyber security has therefore become an indispensable part of the journey before data reach their final destination, all the more so now that daily activities have moved online.

The Challenge

Integrating a new solution into an existing legacy network infrastructure has always been a big headache for IT professionals. A painless upgrade is ideal but not always realistic; more often than not, partial downtime is necessary. As a result, organizations face a single question: are they ready to move forward with the latest trends, or will they step aside?

For those who want to stay rock-solid, it is important to find an appliance that enables more effective and secure network management. Effectiveness here means faster transfer and analysis and the ability to store larger quantities of data, while proper network management tools provide enhanced network security and accessibility.

NEXCOM Solution

NEXCOM proudly introduces a new appliance to enhance its cyber security product line: NSA 5190. It is a new-generation 1U rackmount appliance with the newest Intel® Core™ processor and the latest PCIe 4.0 interface. NSA 5190 is a modular, flexible network solution that fits ideally into SD-WAN, web monitoring, load balancing, and network virtualization deployments.

The 12th Gen Intel® Core™ processor (formerly code-named Alder Lake S) brings additional computing power to process bigger volumes and heavier workloads. This is made possible by combining performance cores and efficient cores (P-cores and E-cores) in a single CPU[2]. The hybrid architecture achieves higher performance with lower power consumption. The CPU also offers large caches to store data so that data requests can be served faster.

Another important capability to highlight is the Intel® 600 series chipset, which brings additional expansion options and value-added features, including an integrated MAC, Intel® Rapid Storage Technology, Intel® Trusted Execution Technology, and more.

Intel® Rapid Storage Technology provides enhanced data protection and expandability. Whether the system operates with one or multiple hard drives, users benefit from both enhanced performance and lower power consumption. Moreover, when more than one drive is used, additional protection against data loss in the event of hard drive failure is available.
Besides the new capabilities brought by the processor, NSA 5190 has a key advantage in memory speed and capacity compared with previous-generation appliances in the same product line. It supports four DDR4 2666/3200 DIMMs with a maximum memory capacity of 128GB, twice that of its predecessor.

NSA 5190 also features an upgrade of the LAN module interface from PCIe 3.0 to PCIe 4.0. The greatest advantage of PCIe 4.0 over PCIe 3.0 is its speed: it doubles the per-lane bandwidth to 2 gigabytes per second and is backward and forward compatible. By adopting dedicated LAN modules, NSA 5190 proves itself a highly configurable networking appliance.

Last but not least, flexibility. With decades of R&D experience, NEXCOM has mastered the design of scalable, multifunctional appliances for different application scenarios, and NSA 5190 is no exception. The mainboard is designed with an edge connector for an add-on card. The choice of card depends on customers' requirements: it can be an FPGA, AI, or smart NIC card, each providing additional capabilities and serving its own purpose.

Conclusion

The evolution of technology brings new possibilities as well as new challenges, and NEXCOM's newly released 1U rackmount, NSA 5190, is ready for both. Its future-proof design, with significantly increased memory capacity, higher data transfer speeds, and a set of optional features, makes NSA 5190 a fitting appliance for various use cases in businesses of all scales. NSA 5190 can manage heavy workloads without exhausting the CPU and can process large data volumes in a shorter time.

NSA 5190
1U Rackmount Appliance with 12th Gen Intel® Core™ Processor, 2 x 1GbE RJ45 Ports, and 4 x LAN Module Slots

12th Gen Intel® Core™ processor
PCH: R680E
4 x DDR4 2666/3200 non-ECC/ECC UDIMM, up to 128GB
1 x M.2 2280 Key M (SATA)
1 x TPM module
1 x PCIe 4.0 x4 connector for low-profile riser card
2 x 1GbE RJ45 ports
4 x LAN module slots
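To put the PCIe 3.0-to-4.0 upgrade into numbers, the small sketch below computes approximate usable bandwidth per lane and for an x4 link from each generation's raw transfer rate and 128b/130b encoding. The figures are theoretical maximums for illustration, not measured results for NSA 5190.

```python
# Approximate theoretical PCIe bandwidth: transfer rate (GT/s) x encoding efficiency,
# expressed in GB/s per lane, then scaled by lane count.
PCIE_GEN = {
    "PCIe 3.0": {"gt_per_s": 8.0,  "encoding": 128 / 130},
    "PCIe 4.0": {"gt_per_s": 16.0, "encoding": 128 / 130},
}

def lane_bandwidth_gbs(gen: str) -> float:
    """Usable bandwidth of a single lane in GB/s (1 GT/s ~ 1 Gbit/s of raw signalling)."""
    spec = PCIE_GEN[gen]
    return spec["gt_per_s"] * spec["encoding"] / 8  # divide by 8: bits -> bytes

for gen in PCIE_GEN:
    per_lane = lane_bandwidth_gbs(gen)
    # An x4 link is shown because NSA 5190 exposes a PCIe 4.0 x4 riser connector.
    print(f"{gen}: ~{per_lane:.2f} GB/s per lane, ~{per_lane * 4:.1f} GB/s for an x4 link")
```

Running it shows roughly 0.98 GB/s per lane for PCIe 3.0 and about 1.97 GB/s for PCIe 4.0, which is the "doubles the per-lane bandwidth to 2 gigabytes per second" figure cited above.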
2024/12/17
Blogs
NEXCOM

AI Shield to Protect Network from Cyber Threat

The Trend

In an era defined by rapid technological advancement and digital transformation, the cybersecurity landscape is undergoing fundamental change. As cyber threats increase, enterprises face mounting challenges in defending their assets against an ever-expanding array of attacks. High-profile data breaches, coupled with a global shortage of skilled cybersecurity professionals, underscore the urgent need for innovative solutions capable of safeguarding sensitive data and critical infrastructure. Against this backdrop, the convergence of artificial intelligence (AI) and cybersecurity promises to revolutionize the way cyber threats are detected, responded to, and mitigated.

The surge in requests for implementing AI algorithms in cybersecurity is driven by several compelling trends. From the constant pressure of advanced cyber threats to the need for regulatory compliance, IT personnel worldwide are seeking intelligent, adaptive security solutions capable of keeping pace with the evolving threat landscape. Furthermore, the integration of AI into security operations empowers organizations to automate routine tasks and achieve greater operational efficiency.

The Challenge

As companies start implementing AI cybersecurity hardware, they encounter numerous struggles that demand innovative solutions and strategic approaches. The primary obstacle is the complexity of integrating AI hardware seamlessly into existing IT infrastructure. IT professionals must navigate compatibility issues, interoperability concerns, and the need for seamless integration with established security systems. Additionally, the resource-intensive nature of AI cybersecurity requires careful consideration of computational resources, memory allocation, and storage capacity to ensure optimal performance and scalability.

Moreover, the sensitive nature of the data processed by AI cybersecurity hardware underscores the critical importance of privacy and security. IT professionals face the tough task of safeguarding sensitive data against breaches, unauthorized access, and compliance violations while harnessing the power of AI for threat detection and mitigation. Balancing robust data protection with the effective use of data for AI-driven insights is a delicate challenge, requiring rigorous encryption and access control techniques.

NEXCOM Solution

NEXCOM offers a solution that empowers organizations to explore the potential of AI-driven cybersecurity to fortify network defense, protect digital assets, and secure a safer future in the digital age.

NEXCOM's NSA 7160R-based cybersecurity solution addresses the multifaceted challenges of implementing AI hardware in cybersecurity operations. Leveraging a modular design and sharing the same form factor as the previous generation of its product family, NEXCOM's solution mitigates integration complexity by integrating seamlessly with existing IT infrastructure and minimizing compatibility issues.

Furthermore, NSA 7160R is designed with scalability in mind, enabling companies to navigate resource constraints effectively by dynamically allocating computational resources, optimizing memory usage, and scaling storage capacity to meet evolving operational demands. Customers can choose different DDR5 speeds based on their budget and requirements. A flexible configuration of LAN modules enables up to 2.6TB Ethernet connectivity per system or allows up to 128GB of additional storage through storage adaptors.
By prioritizing performance optimization, NEXCOM's solution enables enterprises to achieve superior detection accuracy, response times, and scalability, delivering actionable insights and proactive threat mitigation capabilities to guard effectively against emerging cyber threats. NSA 7160R supports the latest dual 5th Gen Intel® Xeon® Scalable processors and is backward compatible with 4th Gen Intel® Xeon® Scalable processors, allowing customers to scale up both in CPU core count and in processor generation.

In addressing the critical concerns of data privacy and security, NEXCOM's solution implements robust hardware-based encryption protocols, ensuring the confidentiality, integrity, and availability of the sensitive information processed by AI. The available accelerators include Intel® Crypto Acceleration, Intel® QuickAssist Software Acceleration, Intel® Data Streaming Accelerator (DSA), Intel® Deep Learning Boost (Intel® DL Boost), Intel® Advanced Matrix Extensions (AMX), and more.[1] The set of accelerators may vary depending on the selected processor SKU.

NSA 7160R empowers IT personnel to proceed with deployments confidently. To validate its efficacy in AI cybersecurity, NEXCOM conducted a series of tests comparing two configurations powered by dual 4th Gen Intel® Xeon® Scalable processors (DUT 1) and dual 5th Gen Intel® Xeon® Scalable processors (DUT 2). The CPU SKUs chosen for the testing are correlated by performance and core count for a fair and unbiased comparison, and the rest of the configurations were kept as equivalent as possible. The detailed test configuration is shown in TABLE I.

For the tests, two open-source security AI models were chosen: MalConv and BERT-base-cased.

TABLE I. DUT 1 AND DUT 2 TEST CONFIGURATIONS

Item | DUT 1 (4th Gen Intel® Xeon®-based) | DUT 2 (5th Gen Intel® Xeon®-based)
CPU | 2 x Intel® Xeon® Gold 6430 processors | 2 x Intel® Xeon® Gold 6530 processors
Memory | 512GB: 16 (8+8) x 32GB DDR5 4800 RDIMMs (both DUTs)
SSD | 512GB: 1 x 2.5" SATA III SSD (both DUTs)
Storage | 4TB: 4 x M.2 2280 PCIe4 x4 NVMe modules in slot 2 (both DUTs)
OS | Ubuntu 22.04, kernel v5.19 (both DUTs)

Test Results for the MalConv AI Model

MalConv (Malware Convolutional Neural Network) is a deep learning-based approach used in cybersecurity for malware detection.

While traditional malware detection methods rely on signatures or behavior analysis and are vulnerable to circumvention by polymorphic or unseen variants, MalConv uses convolutional neural networks (CNNs) to analyze executable file binary data directly. Trained on both malicious and benign files, MalConv learns to distinguish between them based on patterns in the binary data. This enables it to detect polymorphic or previously unseen malware variants by identifying malicious characteristics within the binary code itself, without relying on signatures or behavior analysis.

Latency and throughput of the MalConv AI model were tested on both DUTs. These two metrics provide valuable insight into MalConv's performance, responsiveness, scalability, and efficiency in AI cybersecurity applications. Latency measures the time MalConv takes to analyze an input file and return a classification (malicious or benign), while throughput evaluates MalConv's ability to process multiple files or data streams within a given time frame.
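As a rough illustration of the approach, the sketch below defines a MalConv-style byte-level CNN in PyTorch: raw bytes are embedded, passed through a gated 1D convolution, globally max-pooled, and classified as benign or malicious. The layer sizes and input length are illustrative assumptions; this is not the exact network or configuration used in NEXCOM's tests.

```python
import torch
import torch.nn as nn

class MalConvLike(nn.Module):
    """Simplified MalConv-style classifier over raw executable bytes."""

    def __init__(self, embed_dim: int = 8, channels: int = 128,
                 kernel: int = 512, stride: int = 512):
        super().__init__()
        # 256 byte values + 1 padding index (bytes are shifted by +1 so 0 is padding)
        self.embed = nn.Embedding(257, embed_dim, padding_idx=0)
        self.conv = nn.Conv1d(embed_dim, channels, kernel, stride=stride)
        self.gate = nn.Conv1d(embed_dim, channels, kernel, stride=stride)
        self.fc = nn.Linear(channels, 2)  # benign vs. malicious logits

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, length) of byte values in [1, 256]
        e = self.embed(x).transpose(1, 2)       # (batch, embed_dim, length)
        g = torch.sigmoid(self.gate(e))         # gating branch
        h = self.conv(e) * g                    # gated convolution over the byte stream
        h = torch.max(h, dim=2).values          # global max pooling over positions
        return self.fc(h)

if __name__ == "__main__":
    model = MalConvLike()
    fake_bytes = torch.randint(1, 257, (2, 4096))   # two fake "files" of 4096 bytes each
    print(model(fake_bytes).shape)                  # torch.Size([2, 2])
```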
The results of the MalConv latency and throughput tests for the different optimization methods are shown in TABLE II.

TABLE II. MALCONV AI MODEL TEST RESULTS FOR LATENCY AND THROUGHPUT

Framework | Opt Method | Model | Platform | Latency (ms) | Throughput (samples/second, FPS)
tensorflow 2.15.0 | INC 2.2 | Malconv.inc.int8.pb | DUT 1 | 12.15 | 82.3
tensorflow 2.15.0 | INC 2.2 | Malconv.inc.int8.pb | DUT 2 | 11.18 | 89.47
onnxruntime 1.16.3 | INC 2.2 | Malconv.inc.int8.onnx | DUT 1 | 16.55 | 60.43
onnxruntime 1.16.3 | INC 2.2 | Malconv.inc.int8.onnx | DUT 2 | 14.47 | 69.1

Based on these results, we can conclude that the 5th Gen Xeon-based server shows better results with both optimization methods and in both test items (latency and throughput).

Lower latency is essential for real-time threat detection, enabling rapid response to security incidents. The 5th Gen Xeon DUT shows 8% lower latency with the tensorflow 2.15.0 framework, spending 0.97ms less than the 4th Gen Xeon DUT, and 13% lower latency with the onnxruntime 1.16.3 framework, spending 2.08ms less.

Figure 1. MalConv AI model test results for latency

Higher throughput indicates greater volume-handling capacity, which is essential for analyzing large datasets efficiently. The 5th Gen Xeon DUT shows 9% higher throughput with the tensorflow 2.15.0 framework, analyzing 7.17 more samples per second than the 4th Gen Xeon DUT, and 14% higher throughput with the onnxruntime 1.16.3 framework, analyzing 8.67 more samples per second.

Figure 2. MalConv AI model test results for throughput

Test Results for the BERT-base-cased AI Model

BERT (Bidirectional Encoder Representations from Transformers) is a powerful natural language processing model developed by Google. The "base" version refers to the smaller and computationally less expensive variant of BERT compared to larger counterparts such as BERT-large. The "cased" variant retains the original casing of the input text, preserving capitalization information.

In AI cybersecurity, BERT-base-cased offers a versatile framework for natural language understanding. The model can be used for tasks such as threat intelligence analysis, email and message classification, malicious URL detection, incident response and threat hunting, and more.

During the tests, the static, dynamic, and FP32 BERT-base-cased model latencies were analyzed on each DUT. The tests were conducted using 1 and 4 active cores to determine whether there would be any improvement with increased core involvement.

Static model latency refers to the inference time of the statically quantized (QAT) BERT-base-cased model, dynamic model latency refers to the inference time of the dynamically quantized (QAT) model, and FP32 model latency represents the latency of the original full-precision (32-bit floating point) model without quantization. Minimizing these latencies allows security teams to respond more quickly to security incidents, reducing the time and resources required for investigation and mitigation.
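The sketch below shows, in simplified form, how per-inference latency of a BERT-base-cased classifier can be measured while restricting PyTorch to 1 or 4 CPU threads, mirroring the 1-core and 4-core scenarios above. It is a generic, hypothetical harness using the Hugging Face transformers library; NEXCOM's actual tests additionally used Intel® Extension for PyTorch (IPEX 2.1.100) and the quantized model variants listed in TABLE III.

```python
import time
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_NAME = "bert-base-cased"
SAMPLE = "Suspicious login attempt detected from an unrecognized device."

def mean_latency_ms(num_threads: int, runs: int = 50) -> float:
    """Average single-sample inference latency in milliseconds."""
    torch.set_num_threads(num_threads)   # limit the CPU threads used for compute
    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
    model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)
    model.eval()
    inputs = tokenizer(SAMPLE, return_tensors="pt")
    with torch.no_grad():
        for _ in range(5):                # warm-up iterations, excluded from timing
            model(**inputs)
        start = time.perf_counter()
        for _ in range(runs):
            model(**inputs)
    return (time.perf_counter() - start) / runs * 1000

if __name__ == "__main__":
    for cores in (1, 4):
        print(f"{cores} thread(s): {mean_latency_ms(cores):.2f} ms per inference")
```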
The results are shown in TABLE III.

TABLE III. BERT-BASE-CASED AI MODEL TEST RESULTS FOR STATIC, DYNAMIC AND FP32 LATENCIES
(Framework: PyTorch 2.1.0; Opt Method: IPEX 2.1.100)

Cores Used | Platform | Static QAT Model Latency (ms) | Dynamic QAT Model Latency (ms) | FP32 Model Latency (ms)
1 core | DUT 1 | 97.5 | 472.46 | 862.99
1 core | DUT 2 | 86.28 | 327.53 | 726.27
4 cores | DUT 1 | 29.84 | 118.94 | 261.3
4 cores | DUT 2 | 25.08 | 98.78 | 214.32

Based on these results, we can conclude that the 5th Gen Xeon-based server shows better results in all three test items (static, dynamic, and FP32 BERT-base-cased model latencies) and in both CPU resource allocations (1 and 4 cores).

Lower static model latency is desirable for real-time threat detection, enabling rapid analysis of text data such as security alerts, email content, or chat messages; longer latency introduces processing delays that affect the responsiveness of security operations and hinder timely threat mitigation. The 5th Gen Xeon DUT shows 12% lower static latency in the 1-core scenario, spending 11.22ms less than the 4th Gen Xeon DUT, and 16% lower static latency in the 4-core scenario, spending 4.76ms less.

Figure 3. BERT-base-cased AI Model Test Results for Static Latency

Lower dynamic model latency enables the model to respond more quickly to emerging threats and shifting attack patterns, enhancing its effectiveness in cybersecurity operations. The 5th Gen Xeon DUT shows 31% lower dynamic latency in the 1-core scenario, spending 144.93ms less than the 4th Gen Xeon DUT, and 17% lower dynamic latency in the 4-core scenario, spending 20.16ms less.

Figure 4. BERT-base-cased AI Model Test Results for Dynamic Latency

Lower FP32 model latency matters when the full-precision model is deployed, preserving maximum detection accuracy without sacrificing responsiveness, so that security teams can focus their efforts on genuine threats. The 5th Gen Xeon DUT shows 16% lower FP32 latency in the 1-core scenario, spending 136.72ms less than the 4th Gen Xeon DUT, and 18% lower FP32 latency in the 4-core scenario, spending 46.98ms less.

Figure 5. BERT-base-cased AI Model Test Results for FP32 Latency

Test Summary

Both devices successfully executed the AI security software, with the platform using the 5th Gen Intel® Xeon® Scalable processor showing superior performance over the server employing the 4th Gen Intel® Xeon® Scalable processor. Both platforms demonstrated efficient latency and throughput for security-related tasks and proved ready for AI cybersecurity.

Conclusion

As the cybersecurity landscape continues to evolve, IT personnel must remain proactive in adapting to emerging threats and leveraging the latest advancements in AI technology. Integrating AI models such as MalConv and BERT-base-cased into cybersecurity operations represents a significant advancement in the fight against cyber threats.

NEXCOM's NSA 7160R servers offer enhanced threat detection, rapid response times, and improved operational efficiency, addressing the ever-evolving challenges enterprises face in safeguarding their digital assets. As both tested platforms demonstrate their ability to handle cybersecurity workloads, the decision on which platform to choose ultimately rests with the customer, based on their specific requirements and the performance achieved.

Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries.
2024/12/17
Blogs
NEXCOM

NEXCOM Servers Provide Edge Video AI Analytics and Processing

The Intel® Edge Video Infrastructure (EVI) reference design provides smart city edge video processing running on the NEXCOM NSA 7160R; tests of the solution show it meets Intel® EVI performance metrics for AI and storage performance.

An Intel® Network Builders Community partner, NEXCOM has tested the Intel EVI 2.0 software (included in the Intel EVI reference design) on its NSA 7160R, a powerful three-in-one server equipped with dual 4th Gen Intel® Xeon® Scalable processors for high-performance video processing and AI inference, high-bandwidth LAN modules, and a high-capacity NVMe storage module.

The tests used Intel EVI 2.0 test protocols to examine the throughput of NSA 7160R across four workloads that are important for the performance of computer vision applications:

Image/Video Storage and Retrieval
AI Inferencing (Image/Video)
Feature Matching
Clustering

As the tests in this paper show, the NEXCOM NSA 7160R with the Intel EVI reference design creates a system capable of efficiently processing edge video server workloads.

The solution brief was created by Intel® Corporation. To read the full story, please download the PDF.

Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries.
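As a simple illustration of the "Feature Matching" workload in the list above, the NumPy sketch below compares a query embedding against a small gallery of stored feature vectors using cosine similarity. It is a generic example under assumed vector sizes, not the Intel EVI implementation or test protocol.

```python
import numpy as np

def cosine_match(query: np.ndarray, gallery: np.ndarray, top_k: int = 3):
    """Return indices and scores of the top_k gallery vectors most similar to the query."""
    q = query / np.linalg.norm(query)
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    scores = g @ q                                      # cosine similarity per gallery vector
    best = np.argsort(scores)[::-1][:top_k]
    return best, scores[best]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    gallery = rng.normal(size=(1000, 256))              # 1000 stored 256-dim feature vectors
    query = gallery[42] + 0.05 * rng.normal(size=256)   # a slightly perturbed copy of entry 42
    idx, score = cosine_match(query, gallery)
    print(idx, np.round(score, 3))                      # entry 42 should rank first
```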
2024/12/17
Blogs
NEXCOM

NViS 5704: A Powerful 1U Rackmount NVR for Video Stream Analysis, Driven by 13th Gen Intel® Core-i Processor

Trend

In the workstation network video recorder (NVR) market, customers are no longer solely concerned with CPU performance and the number of cameras a system can support. Nowadays, they are also interested in the system's ability to perform streaming analytics while supporting a given number of cameras. It is therefore important to consider whether the AI NVR's video stream processing capabilities can be enhanced, for example by adding an image accelerator card (graphics card). Additionally, a high-performance CPU with better optimization for running and executing these demands can meet customers' high expectations. Finally, NVMe has become increasingly popular in recent years due to the demand for low-latency, high-speed data transmission in software operations and data access. In the present landscape, deep learning has become a standard feature in NVRs, and deploying model inference trained with CNN, RBFN, MLP, DBN, and other networks is an off-the-shelf function. Consequently, selecting an appropriate, accurate, real-time accelerator solution for gradual integration into the system has become a crucial requirement in the surveillance market.

Challenge

Currently, a significant portion of the existing 1U rackmount NVR market uses motherboards smaller than Micro ATX when incorporating supplementary PCIe cards, a choice driven by the inability to accommodate full-height, full-length cards. Moreover, the board's dimensions prevent the simultaneous inclusion of M.2 Key B and M.2 Key M interfaces. Complicating matters further, older Intel® platform CPUs can only provide a PCIe x16 interface output, so enabling support for M.2 Key M PCIe 4.0 x4 requires reducing the PCIe slot to x8. This situation imposes limitations on the addition of a graphics card.

Solution

Thanks to the significant enhancements in Intel®'s 13th Gen design, there has been a marked improvement in the CPU PCIe pipeline, which now combines PCIe 5.0 x16 and PCIe 4.0 x4 interfaces. This allows a PCIe 5.0 x16 slot for accelerator cards alongside an additional NVMe support slot (the 13th Gen Intel® CPU PCIe pipeline is presented in Table I). The upgrade also enhances NVMe performance by enabling PCIe 4.0 x4 support (performance test results comparing PCIe 3.0 x4 and PCIe 4.0 x4 are presented in Table II). With this upgrade, NViS 5704 no longer uses a standard Micro ATX form factor design, enabling it to be equipped with a PCIe slot supporting PCIe 5.0 x16 together with an M.2 Key M PCIe 4.0 x4 configuration. Consequently, users have a broader range of options to choose from, catering to their specific requirements.

TABLE I: 13TH GEN INTEL® CORE™ CPU PCIe PIPELINE

TABLE II: BENCHMARK BY WINDOWS 10 + CRYSTALDISKMARK 5.2.1
(WD SN740 NVMe SSD, M.2 2280 Key M, PCIe Gen4 x4; WD SDDPNQD-256G)

Test | PCIe 3.0 x4 Read [MB/s] | PCIe 3.0 x4 Write [MB/s] | PCIe 4.0 x4 Read [MB/s] | PCIe 4.0 x4 Write [MB/s]
Seq Q32T1 | 3565 | 1980 | 4081 | 2026
4K Q32T1 | 350.2 | 176.2 | 887 | 1166
Seq | 2756 | 1979 | 3613 | 2026
4K | 70.17 | 148.8 | 87.44 | 317.6

With the inclusion of an additional PCIe 4.0 x4 interface, users can integrate accelerator cards such as the Hailo-8 M.2 module. This facilitates the seamless adoption of various Hailo-8 modules, effectively boosting video streaming inference performance.
Such modules require the higher speed of the PCIe 4.0 x4 specification; if the PCIe pipeline fails to meet the requirements of the accelerator card, the result can be a significant reduction in inference performance. (Hailo-8 performance test results comparing PCIe 3.0 x4 and PCIe 4.0 x4 are presented in Table III.)

TABLE III: HAILO-8 PERFORMANCE TEST RESULTS, PCIe 3.0 x4 VS PCIe 4.0 x4
(Benchmark with YOLOv5m and SSD_MobileNet_V1 object detection; Hailo-8 with 13th Gen Intel® Core™ i9-13900E, M.2 2280 Key M Gen4 PCIe x4)

Object Detection | Input Resolution | FPS (PCIe 3.0 x4) | FPS (PCIe 4.0 x4)
YOLOv5m | 640 x 640 | 178 | 217
SSD_MobileNet_V1 | 300 x 300 | 862 | 1053

Conclusion

The NEXCOM NViS 5704 NVR, powered by a 13th Gen Intel® Core™ processor, presents a significant advancement in overall performance. The CPU incorporates Intel® Thread Director technology, which assists the OS scheduler in assigning work to the right cores, leading to more energy-efficient, power-saving software execution and a better arrangement of applications. Additionally, the NEXCOM NViS 5704 NVR has been enhanced with an additional PCIe 4.0 x4 interface to support high-speed NVMe storage and accelerator cards. This enhancement takes 1U rackmount NVRs to a higher level, expanding their scope beyond video recording and display. These capabilities empower security officers to efficiently monitor and analyze video streams, identifying potential threats and recognizing objects or individuals of interest in real time.
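For readers who want a rough sequential-read figure like those in Table II without a dedicated tool, the Python sketch below reads a large test file in 1 MiB chunks and reports MB/s. It is a simplified illustration only: the published numbers above were obtained with CrystalDiskMark under Windows 10, the file path is hypothetical, and OS-level caching can inflate results unless the file is much larger than available RAM or caches are dropped first.

```python
import time

CHUNK = 1024 * 1024          # 1 MiB per read
TEST_FILE = "testfile.bin"   # hypothetical large file placed on the NVMe drive under test

def sequential_read_mbps(path: str) -> float:
    """Read the whole file sequentially and return throughput in MB/s."""
    total = 0
    start = time.perf_counter()
    with open(path, "rb", buffering=0) as f:   # unbuffered to reduce Python-side caching
        while True:
            chunk = f.read(CHUNK)
            if not chunk:
                break
            total += len(chunk)
    elapsed = time.perf_counter() - start
    return total / elapsed / 1e6

if __name__ == "__main__":
    print(f"Sequential read: {sequential_read_mbps(TEST_FILE):.0f} MB/s")
```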
2024/12/17
Blogs
NEXCOM

NViS 66162: Revolutionizing Workstation NVRs for Enhanced Video Processing and Analytics

Trend

In the workstation network video recorder (NVR) market, customers are no longer solely concerned with CPU performance and the number of cameras a system can support. Nowadays, they are also interested in the system's ability to perform streaming analytics while supporting a given number of cameras. It is therefore important to consider whether the AI NVR's video stream processing capabilities can be enhanced, for example by adding an image accelerator card (graphics card). Additionally, a high-performance CPU with better optimization for running and executing these demands can meet customers' high expectations. Finally, NVMe has become increasingly popular in recent years due to the demand for low-latency, high-speed data transmission in software operations and data access.

Challenge

Previously, Intel's CPU design featured a single core type with multithreading, with no distinction between big and small cores, making it challenging for software developers to optimize execution. Moreover, network video recorder systems with built-in PoE ports often have limited space, making it difficult to add an additional video accelerator card; to address this, supporting both rackmount and wall-mount installations is essential in the workstation NVR market. In software development, developers prefer to run high-speed, low-latency data access directly through NVMe on the CPU's PCIe pipeline. However, the previous design required the CPU's PCIe x16 pipeline to accommodate NVMe, forcing developers to choose between installing a PCIe x16 graphics card or NVMe.

Solution

With the evolution of generations and the gradual maturation of the microarchitecture, 12th Gen Intel® Core™ processors represent a significant change in CPUs. They use a mixture of performance and efficiency cores, offering a hybrid "big.LITTLE"-style architecture that helps the scheduler handle complex, multicore workloads and allows the cores to be managed efficiently. Performance tests comparing the Intel® Xeon® E-2278GE and the Intel® Core™ i9-12900E are shown in Table I.

TABLE I. BENCHMARK BY WINDOWS 10 + PASSMARK PERFORMANCETEST 10.2

Metric | Intel® Xeon® E-2278GE (Intel® Coffee Lake-S) | Intel® Core™ i9-12900E (Intel® Alder Lake-S)
PassMark Rating | 3160.9 | 3982.2
CPU Mark | 13694.4 | 30780.3
2D Graphics Mark | 359.5 | 399.8
3D Graphics Mark | 1355.3 | 1876.4
Memory Mark | 2834.4 | 3437.1
Disk Mark | 20290.3 | 24023.5

Based on the performance test results above, the new Intel® Core™ i9-12900E offers about 25% better overall performance than the Intel® Xeon® E-2278GE. Additionally, its graphics decoding capabilities enable nearly double the real-time display performance. Table II shows the real-time display performance test results for the Intel® Xeon® E-2278GE with Intel® UHD Graphics 630 and the Intel® Core™ i9-12900E with Intel® UHD Graphics 770.

TABLE II. BENCHMARK BY WINDOWS 10 + GEEKS3D FURMARK

Platform | Test @ 720p | Test @ 1080p
Intel® Xeon® E-2278GE / Intel® UHD Graphics 630 (Coffee Lake-S) | 15 FPS, score 897 | 9 FPS, score 526
Intel® Core™ i9-12900E / Intel® UHD Graphics 770 (Alder Lake-S) | 27 FPS, score 1588 | 16 FPS, score 915

Furthermore, the new design of the Intel® Core™ i9-12900E has made significant improvements to the CPU PCIe pipeline.
It now offers a mixture of PCIe x16 and PCIe x4 interfaces, which allows for the addition of an accelerator card in a PCIe x16 slot plus additional NVMe support. This new technology enhances NVMe performance by providing PCIe Gen4 x4 support, roughly twice as fast as the previous PCIe Gen3 generation. The test results comparing PCIe 4.0 and PCIe 3.0 performance on the new design are shown in Table III.

TABLE III. BENCHMARK BY WINDOWS 10 + CRYSTALDISKMARK 5.2.1
(WD SN740 NVMe SSD, M.2 2280 Key M, PCIe Gen4 x4; WD SDDPNQD-256G)

Test | Intel® Xeon® E-2278GE (Coffee Lake-S) Read [MB/s] | Write [MB/s] | Intel® Core™ i9-12900E (Alder Lake-S) Read [MB/s] | Write [MB/s]
Seq Q32T1 | 3565 | 1980 | 4081 | 2026
4K Q32T1 | 350.2 | 176.2 | 887 | 1166
Seq | 2756 | 1979 | 3613 | 2026
4K | 70.17 | 148.8 | 87.44 | 317.6

Conclusion

The new Intel® Core™ i9-12900E CPU offers a significant improvement in overall performance. The CPU features Intel® Thread Director technology, which assists the OS scheduler in assigning work to the right cores, making software execution more energy-efficient and power-saving when arranging applications. The GPU is Intel's latest Intel® UHD Graphics 770, and overall display acceleration is significantly enhanced. This takes workstation network video recorders to another level, allowing them to focus not only on video recording and display functions but also on enabling developers to incorporate more video stream analysis capabilities, thus increasing their value. The NEXCOM NViS 66162 NVR has the advantage of breaking through the limitations of the workstation NVR and creating new value.
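The sketch below illustrates one way an NVR application on Linux can cooperate with a hybrid P-core/E-core CPU: latency-sensitive analytics work is pinned to a set of cores reserved for it, while background tasks are confined to the rest. The core numbering is a hypothetical assumption and is platform-specific; in practice Intel® Thread Director and the OS scheduler handle most of this placement automatically, so explicit pinning is only an option, not a requirement.

```python
import os

# Hypothetical core layout: cores 0-7 are performance cores, 8-15 are efficiency cores.
P_CORES = set(range(0, 8))
E_CORES = set(range(8, 16))

def pin_current_process(cores: set) -> None:
    """Restrict the current process to the given CPU cores (Linux only)."""
    os.sched_setaffinity(0, cores)          # pid 0 means "this process"
    print("Now allowed on cores:", sorted(os.sched_getaffinity(0)))

if __name__ == "__main__":
    # Keep real-time video analytics on the performance cores ...
    pin_current_process(P_CORES)
    # ... and switch to efficiency cores before starting archival or housekeeping work.
    pin_current_process(E_CORES)
```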
2024/12/02
Press Release
NEXCOM

NEXCOM's Cutting-Edge Technology Recognized: Edge AI Mobile Computer and Supercapacitor UPS Clinch 2025 Taiwan Excellence Award

NEXCOM today announced that its edge AI mobile computer for in-vehicle and rail applications — the "IP67 AI Intelligent In-Vehicle/Railway Computer ATC 3750-IP7-8M" — and the "In-Vehicle/Railway Supercapacitor UPS VTK-SCAP" have won the 2025 Taiwan Excellence Award, demonstrating global leadership in automotive technology.

IP67-rated edge AI mobile computer for rail/in-vehicle: ATC 3750-IP7-8M

The ATC 3750-IP7-8M, powered by the NVIDIA® Jetson AGX Orin™ system on module (SOM), delivers up to 275 TOPS and supports a wide range of autonomous machines and advanced in-vehicle applications, such as advanced driver assistance systems (ADAS), automatic number plate recognition (ANPR), autonomous mobile robots (AMRs), machine learning (ML), intelligent transportation systems (ITS), and railway safety. Moreover, the product has obtained the automotive E-mark and railway EN50155 certifications and achieves IP67 protection, making it one of the industry's first high-end edge AI in-vehicle/railway computers integrating intelligent image recognition and AI video analytics technology.

Supercapacitor UPS: VTK-SCAP

VTK-SCAP is an advanced uninterruptible power supply (UPS) developed by NEXCOM. Compared with a traditional lithium-battery UPS, VTK-SCAP uses supercapacitors and operates in a wide temperature range of -35°C to 80°C, effectively addressing the extreme temperature variations in vehicle environments and meeting the needs of applications requiring delayed shutdown and data backup. It can be flexibly expanded with up to one master device and three secondary ones to support computer systems of up to 200W, depending on the end customer's usage scenario and power requirements. VTK-SCAP is E13-mark and EN50155 certified, making it suitable for in-vehicle and railway applications and providing clients with a flexible and comprehensive mobile computing solution.

NEXCOM is committed to providing comprehensive AIoT digital transformation solutions. The company's mobile computer series has repeatedly won national awards, demonstrating its global leadership in technology. NEXCOM will continue to invest resources in assisting global customers with superior intelligent in-vehicle solutions, working together towards a smart and sustainable future.

Learn more about NEXCOM's mobile computing products: https://www.nexcom.com/Products/mobile-computing-solutions

Taiwan Excellence Award-Winning Products:
IP67 AI Intelligent In-Vehicle/Railway Computer ATC 3750-IP7-8M
In-Vehicle/Railway Supercapacitor UPS VTK-SCAP
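To illustrate how supercapacitor backup time for a delayed-shutdown scenario is typically sized, the short calculation below estimates hold-up time from the usable energy stored between two voltage levels. Every value in the sketch (capacitance, voltages, efficiency) is a hypothetical placeholder and not a VTK-SCAP specification; only the 200W load figure comes from the text above.

```python
# Rough supercapacitor hold-up time estimate: usable energy between the charged
# bank voltage and the DC-DC converter's minimum input voltage, divided by load power.
# All component values below are hypothetical assumptions, NOT VTK-SCAP specifications.
CAPACITANCE_F = 58.0      # effective capacitance of the supercapacitor bank (farads)
V_FULL = 16.0             # fully charged bank voltage (volts)
V_MIN = 8.0               # minimum usable voltage for the DC-DC converter (volts)
LOAD_W = 200.0            # worst-case supported system load from the article (watts)
EFFICIENCY = 0.9          # assumed DC-DC conversion efficiency

usable_energy_j = 0.5 * CAPACITANCE_F * (V_FULL**2 - V_MIN**2) * EFFICIENCY
holdup_s = usable_energy_j / LOAD_W
print(f"Usable energy: {usable_energy_j:.0f} J -> ~{holdup_s:.0f} s hold-up at {LOAD_W:.0f} W")
```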
2024/10/24
Case Studies
NEXCOM

Riverside Revolution: NEXCOM's Neu-X102-N50 Transforms Tourist Information

Along the bustling banks of the Thames in London, sleek digital totems now stand as silent guides for curious visitors. These modern sentinels display real-time boat schedules, weather updates, and a wealth of local information, transforming the riverside experience. At the heart of this smart city evolution lies NEXCOM's powerful Neu-X102-N50, the driving force behind these innovative information hubs.

These information totems are revolutionizing visitor experiences in waterfront destinations citywide. While their exteriors may vary to suit local aesthetics, their core remains constant: NEXCOM's edge computing system, the Neu-X102-N50.

At the heart of these totems lies impressive technology tailored for outdoor applications. The Neu-X102-N50 boasts an Intel Alder Lake-N N50 processor and up to 16GB of RAM, ensuring smooth performance even in challenging environments. Its ability to operate in temperatures from -5°C to 50°C makes it suitable for diverse climates.

The Neu-X102-N50's technical prowess extends beyond its processor. With support for up to two HDMI ports for playing vivid content, it can deliver eye-catching visuals to attract and inform visitors. Its M.2 and mPCIe slots allow for expandable storage as well as LTE and Wi-Fi 6 capability, providing ample space for rich content and fast wireless connectivity. These features enable the totems to serve as comprehensive information hubs, capable of handling high-traffic areas of the smart city with ease.

Tourists interact with vibrant 32-inch touchscreen displays, accessing a wealth of information beyond just schedules and weather. Local attractions, dining recommendations, and even real-time air quality data are at their fingertips. The edge computing system's dual 2.5GbE LAN ports and 4G LTE connectivity ensure that this information is always current and readily available. Through a USB light sensor and a COM port, the totem can automatically adjust its brightness, ensuring all information remains readable in varying light conditions while contributing to the system's energy efficiency, aligning with modern urban sustainability goals.

For totem operators, remote management capability is key. They can update content and perform system maintenance through LAN or LTE, significantly reducing operational costs and ensuring efficient management.

These edge computing systems improve the visitor experience and provide valuable data insights for urban planning and tourism management. Through cameras connected via USB 3.2 high-bandwidth ports, the Neu-X102-N50 ensures smooth capture and transmission of data, enabling real-time monitoring and analysis of visitor flows. This capability allows city planners and tourism officials to make informed decisions, optimize resource allocation, and enhance overall urban mobility, while maintaining a seamless and enjoyable experience for tourists and locals alike.

The Neu-X102-N50 represents a significant step forward in smart city technology, blending seamlessly into urban landscapes while providing an essential service to tourists and locals alike. As more areas of the city adopt this technology, we can expect to see a transformation in how people interact with and navigate waterfront destinations, ushering in a new era of informed and engaged urban tourism.

Application Diagram
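As an illustration of the brightness-adjustment loop described above, the sketch below reads an ambient-light value from a sensor attached over a serial (COM) port and maps it to a display backlight level. The port name, message format, and lux-to-brightness mapping are all hypothetical assumptions; the actual totem integration depends on the specific sensor and display controller used.

```python
import time
import serial  # pyserial

PORT = "/dev/ttyUSB0"   # hypothetical serial port of the USB light sensor
BAUD = 9600

def lux_to_brightness(lux: float) -> int:
    """Map ambient light (lux) to a 10-100% backlight level (assumed linear mapping)."""
    return max(10, min(100, int(lux / 10)))

def main() -> None:
    with serial.Serial(PORT, BAUD, timeout=2) as sensor:
        while True:
            line = sensor.readline().decode(errors="ignore").strip()
            if line:
                lux = float(line)                 # sensor assumed to report lux as plain text
                level = lux_to_brightness(lux)
                print(f"Ambient {lux:.0f} lux -> backlight {level}%")
                # In a real deployment the level would be written to the display controller here.
            time.sleep(5)

if __name__ == "__main__":
    main()
```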
2024/10/24
Videos
NEXCOM

NEXCOM ATC Series

NEXCOM's ATC series products are built with NVIDIA® Jetson™ modules. NEXCOM offers a wide-ranging product portfolio that caters to various application needs, providing AI performance ranging from 20 TOPS to 275 TOPS. Whether it's enabling intelligent transportation, improving public safety, maximizing production efficiency, predicting abnormal asset conditions, or optimizing pattern recognition, NEXCOM's diverse selection of Jetson™ modules with integrated industrial interfaces plays a pivotal role in deploying AI workloads across multiple industries in the field.
2024/10/23
Case Studies
NEXCOM

NEXCOM AIEdge-X®500 Edge AI Computing System for Traffic Management in Bangkok

The bustling metropolis of Bangkok, Thailand, stands as a beacon of cultural richness and economic vitality. However, like other emerging cities in Southeast Asia, its growth has led to an unprecedented challenge: traffic congestion. Bangkok has traffic lights at about 500 locations, and in many areas they were still controlled by timers and did not adjust to actual traffic conditions.

The Thai government launched the "Thailand 4.0" project to integrate AI into cities' existing traffic management systems, optimizing road conditions and enhancing urban transportation planning.

Solution

The AIEdge-X®500's LAN ports are connected to CCTV traffic cameras installed at intersections to record video and perform license plate recognition, catching traffic violators who exceed speed limits or run red lights, as well as motorists who park their vehicles in no-parking areas.

On top of that, the City Administration and the Office of Transport and Traffic Policy and Planning developed a traffic management model that uses AI to estimate traffic congestion each hour, analyze bottlenecks, and come up with solutions in real time, for example by adjusting traffic lights in line with traffic volume.

NEXCOM's AIEdge-X®500 integrates seamlessly into the traffic signal box, delivering remarkable performance even in the challenging high-temperature, high-humidity conditions characteristic of the subtropical climate. The device operates efficiently within a temperature range of 0°C to 45°C and a humidity range of 10% to 90%.

Powered by an Intel® 8th/9th Gen Core™ processor, the AIEdge-X®500 combines maximum graphics processing potential with support for large storage and for peripheral and internal devices, effectively meeting industrial AI requirements from image processing and optimization to machine/deep learning and machine vision.

These AIEdge-X®500 edge AI computing solutions are expected to be deployed at 100 more locations in the next 2 to 3 years to facilitate traffic management in Bangkok and reduce traffic violations in the city.

Application Diagram
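As a simplified illustration of the kind of processing an edge AI box performs at an intersection, the OpenCV sketch below pulls frames from a CCTV camera's RTSP stream and uses background subtraction to estimate how much of the scene is occupied by moving objects, a rough congestion proxy. The stream URL is hypothetical, and the deployed AIEdge-X®500 system relies on trained license plate recognition and traffic models rather than this simple heuristic.

```python
import cv2

STREAM_URL = "rtsp://203.0.113.20/stream1"   # hypothetical CCTV camera at an intersection

def congestion_ratio(stream_url: str, frames: int = 300) -> float:
    """Average fraction of pixels occupied by moving objects over N frames."""
    cap = cv2.VideoCapture(stream_url)
    subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
    ratios = []
    for _ in range(frames):
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)                     # foreground = moving vehicles
        ratios.append(cv2.countNonZero(mask) / mask.size)  # occupied share of the frame
    cap.release()
    return sum(ratios) / len(ratios) if ratios else 0.0

if __name__ == "__main__":
    ratio = congestion_ratio(STREAM_URL)
    print(f"Approximate moving-traffic occupancy: {ratio:.1%}")
```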