By: Renee Burrows on 4/19/2021 3:11:18 PM
Machine learning is a subset of Artificial Intelligence (AI) that gives systems the ability to automatically build models (learning) and improve model accuracy from experience, without being explicitly programmed by humans. Machine learning and artificial intelligence focus on the development of software solutions that can access data from many sources and use it to learn for themselves.
The process of learning, or training in machine learning nomenclature, begins with consuming large amounts of data and analyzing it for patterns. These patterns are then used to make better decisions in the future, based on the past examples found in the underlying data set. In theory, the primary aim is to allow computers to consume this data and learn automatically without human intervention or assistance. In practice, human intervention is involved in well over 90% of machine learning applications.
Machine learning algorithms are commonly categorized as either supervised or unsupervised.
Supervised machine learning algorithms apply what has been learned historically, either from previous machine learning applications or from a model developed by a human. The input data usually requires labeled examples in order to predict future events. Starting from the analysis of a known training dataset, the learning algorithm produces an inferred function to make predictions about the output values. After sufficient training, the machine learning system can generally provide targets, or guidelines, on new data. The learning algorithm can also compare its output with the labeled input data to measure its error and then iterate to develop a more accurate model. It is this reliance on the accuracy of the labeled input data that generally makes tuning a machine learning algorithm for manufacturing environments difficult; in other words, the model can only be as accurate as its input data.
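The supervised loop described above — train on labeled examples, predict, compare predictions back against the labels — can be sketched with a toy nearest-neighbor classifier. The feature values and "pass"/"fail" labels below are invented for illustration:

```python
import math

# Hypothetical labeled training data: (feature vector, label).
# Features might be, e.g., (temperature, vibration) readings from a process.
train = [
    ((1.0, 1.0), "pass"),
    ((1.2, 0.9), "pass"),
    ((4.0, 4.2), "fail"),
    ((3.8, 4.5), "fail"),
]

def predict(point):
    """1-nearest-neighbor: return the label of the closest training example."""
    nearest = min(train, key=lambda ex: math.dist(point, ex[0]))
    return nearest[1]

def accuracy(examples):
    """Compare predictions against the known labels to measure error."""
    correct = sum(1 for x, y in examples if predict(x) == y)
    return correct / len(examples)

print(predict((1.1, 1.1)))   # a point near the "pass" cluster
print(accuracy(train))
```

Note how the accuracy check depends entirely on the labels being right — mislabel a training point and the "error" the model corrects toward is itself wrong, which is exactly the manufacturing tuning difficulty described above.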
In contrast, unsupervised machine learning algorithms are used when the training data is neither classified nor labeled. Unsupervised learning studies how a system can develop a model on its own to describe hidden structure that may not be easily seen in the input data. The machine learning system does not optimize toward an accurate output (no labeled input data exists to compare against); instead, it explores the data and can draw inferences from datasets to describe hidden structures in the unlabeled input.
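A classic example of finding hidden structure without labels is clustering. Below is a minimal k-means sketch in plain Python; the two groups in the invented data are never labeled, yet the algorithm discovers them on its own:

```python
import math
import random

def kmeans(points, k, iters=20, seed=0):
    """Toy k-means: group unlabeled points into k clusters."""
    random.seed(seed)
    centers = random.sample(points, k)
    for _ in range(iters):
        # Assign each point to its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda i: math.dist(p, centers[i]))
            clusters[i].append(p)
        # Move each center to the mean of its cluster (keep it if empty).
        for i, c in enumerate(clusters):
            if c:
                centers[i] = tuple(sum(v) / len(c) for v in zip(*c))
    return centers, clusters

# Unlabeled data with two obvious groups; no labels are ever provided.
data = [(0.9, 1.1), (1.0, 0.9), (1.1, 1.0), (5.0, 5.1), (4.9, 5.0), (5.1, 4.9)]
centers, clusters = kmeans(data, k=2)
print(centers)
```

There is no "accuracy" to report here — only the discovered structure, which is the defining trade-off of the unsupervised approach.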
Semi-supervised machine learning algorithms fall somewhere between supervised and unsupervised learning, as they train on a hybrid of labeled and unlabeled data – typically a small amount of labeled data and a large amount of unlabeled data. Systems that use this method can considerably improve learning accuracy. AI applications in manufacturing usually leverage the semi-supervised approach because data acquired from manufacturing processes requires high skill or deep system knowledge to label accurately enough to train on. Acquiring this labeled data takes time and energy that is generally very costly in a high-performance manufacturing environment, so the labeled data is supplemented with an abundance of unlabeled data, which typically comes at a much lower cost.
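One common semi-supervised technique is self-training: fit a model on the small labeled set, use it to pseudo-label the cheap unlabeled pool, and fold those pseudo-labeled points back into the training set. A minimal sketch (data invented for illustration, reusing a nearest-neighbor model for simplicity):

```python
import math

# Hypothetical data: a small, expensive labeled set plus a larger unlabeled pool.
labeled = [((1.0, 1.0), "pass"), ((4.0, 4.0), "fail")]
unlabeled = [(0.9, 1.2), (1.1, 0.8), (3.9, 4.1), (4.2, 3.8)]

def nearest_label(point, examples):
    """Return the label of the nearest labeled example."""
    return min(examples, key=lambda ex: math.dist(point, ex[0]))[1]

# Self-training: pseudo-label the unlabeled pool with the current model,
# then fold those pseudo-labeled points back into the training set.
pseudo = [(p, nearest_label(p, labeled)) for p in unlabeled]
training_set = labeled + pseudo

print(nearest_label((1.05, 1.05), training_set))
```

The economics match the manufacturing case above: only two points needed an expert label, while the other four came "for free" from the process data itself.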
Combining machine learning with AI in the manufacturing environment is driving existing manufacturing processes to the next level. Reach out to us via our contacts or products page to learn how AI Manufacturing Solutions can deliver value to your operating margins.
By: Renee Burrows on 4/12/2021 1:20:42 PM
We are happy to announce that our analytics group has kicked off a public Kaggle account registered under @aimfgs to better match our customers to our capabilities. Follow one of our Kaggle data competitions or any of our social accounts to find out more! Or better yet, give us a call to see how our team can solve your technical challenges!
By: Felicia Franks on 4/11/2021 6:48:03 AM
AMD says it best: “Break from the render bar. Realize your creative vision with the most powerful 24-core desktop processor in the world.” On this build, our customer required the AMD Threadripper 3960X, a processor capable of running above 4 GHz continuously in production while maintaining adequate CPU temperatures. These requirements immediately directed us to a front-to-back case airflow design with an air-based CPU cooler that outperforms even leading water-based coolers. We leveraged the Noctua NH-U14S coupled with two front intake fans and a single rear exhaust fan to keep CPU temperatures below 80 °C at full load.
The AMD Threadripper 3960X is the workhorse – the heart and soul – of their analytics machine. Its 24 cores at a base clock of 3.8 GHz and a max boost clock of 4.5 GHz are enough to drive even the toughest processing tasks. For this machine learning application, we coupled the AMD processor with an MSI TRX40 PRO motherboard and an NVIDIA RTX 4000 to deliver the next level of processing capability via the platform's 88 PCIe 4.0 lanes. At AI Manufacturing Solutions, we are proponents of the BIOS software offered by MSI with their PRO series motherboards, which gives the user adequate capability to optimize the system yet keeps safety thresholds in place to protect the CPU while running processor- or GPU-intensive applications. Here is how the numbers stack up in PASSMARK performance software, with an overall 98th-percentile rating.
By: Ashley Markel on 4/7/2021 11:10:38 AM
NVMe, or Non-Volatile Memory Express, is a protocol released in 2011 for accessing high-speed storage media that became popular with the introduction of Solid-State Drives (SSDs). But what is NVMe, and why is it important for machine learning and analytics applications?
As businesses and systems continue to consume and process ever more data to support manufacturing, it is important to rethink how this data is captured, preserved, accessed, and transformed. Because of its speed, NVMe is revolutionizing how users configure deep learning systems for data storage, data access, and overall architecture when combined with other, more traditional storage methods.
This article will explain what NVMe is and take a deep technical dive into how the storage architecture works. Upcoming blogs will cover the features and benefits NVMe brings businesses, use cases where it’s being deployed today, and how customers take advantage of NVMe SSDs. Additionally, we will cover our platform offerings and fully featured flash storage systems for everything from IoT Edge applications to artificial intelligence (AI) and machine learning systems.
NVMe builds on Solid-State Drive (SSD) technology (solid-state memory, conceptually similar to RAM but non-volatile) and introduces impressive write speeds of up to 3500 MB/s, moving the limit on actual data read/write speeds to the interface used.
SSDs were originally introduced as drop-in replacements for rotary drives, easing the consumer's transition from slower rotation-based hard drives toward newer solid-state drives. The ease of adding an SSD to an existing SATA port came at the expense of limiting data transfer to the SATA protocol, currently maxed out between 300-600 MB/s. Companies worked on standardizing the interfaces from the 1990s to the early 2000s while also introducing speed improvements by leveraging different backplane technologies.
Referencing URTech, “It was not a surprise to anyone that SSD engineers kept working on improvements and today you have three primary options for SSD." It was the necessity of enhancing the interface protocol speed to reach the theoretical max of 3500 MB/s that naturally moved the SSD interface closer to the backplane of the PC, i.e., the PCI Express bus. A simple way to look at it: an SSD integrated directly into the backplane via the PCI Express bus is NVMe technology.
Drive options that exist today:
1. Rotary drives via traditional SATA port
2. SSD drives via traditional SATA port
3. NVMe SSD drives via the PCIe bus
Per Western Digital: “The NVMe protocol capitalizes on parallel, low latency data paths to the underlying media, similar to high performance processor architectures. This offers significantly higher performance and lower latencies compared to legacy SAS and SATA protocols. This not only accelerates existing applications that require high performance, but it also enables new applications and capabilities for real-time workload processing in the data center and at the Edge.”
AI and deep learning systems are generally GPU- and RAM-intensive processes requiring millions of transactions per second. When the user or application needs to save this data mid-process to a drive for analysis or manufacturing outputs, the system can quickly become starved by the read/write subsystem. This is where NVMe drives can help relieve the bottleneck placed on data storage. As the machine learning models used within industry move from the cloud to the edge, and as we see an exponential rise in the amount of data being stored, it will be increasingly important to eliminate the read/write system as the critical path. NVMe’s unique features help avoid these bottlenecks, from traditional scale-up database applications to emerging edge computing architectures, and allow systems to scale up to meet new data demands.
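You can get a rough feel for whether the read/write system is your critical path with a few lines of Python. The sketch below is a toy benchmark only – real drive benchmarks (e.g. fio) control caching, queue depth, and block size far more carefully – but the `fsync` call at least forces the data to the device rather than stopping at the page cache:

```python
import os
import tempfile
import time

def write_throughput_mb_s(size_mb=64, chunk_mb=4):
    """Write size_mb of data to a temp file and return a rough MB/s figure."""
    chunk = os.urandom(chunk_mb * 1024 * 1024)
    with tempfile.NamedTemporaryFile(delete=False) as f:
        path = f.name
        start = time.perf_counter()
        for _ in range(size_mb // chunk_mb):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())  # force data to the device, not just the page cache
        elapsed = time.perf_counter() - start
    os.unlink(path)
    return size_mb / elapsed

print(f"{write_throughput_mb_s():.0f} MB/s")
```

On a SATA SSD this number will plateau in the hundreds of MB/s; on an NVMe drive it can climb several times higher – which is exactly the bottleneck gap described above.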
Stay tuned as we provide data-backed examples of the impact of NVMe on an advanced analytics machine learning application.
By: Stacey Simmons on 4/1/2021 3:02:36 PM
AI Manufacturing Solutions is glad to announce that we have partnered with some of the best wholesalers and distributors around in order to keep our lead times short where our competitors may be suffering. Our Analytics and AI PCs have maintained a lead time of 3-5 weeks ARO, with expedited build and shipping available for small fees. At AI Manufacturing Solutions, we believe our customers' competitive advantage should not be slowed down by a 10+ week delay in computer offerings. We expect to maintain this lead time for the immediate future and will update you if anything changes. Reach out to us via our contacts page or hardware pages to get a quote today.
By: Renee Burrows on 3/31/2021 6:03:40 PM
At AI Manufacturing Solutions, we recently had a customer come to us requesting a custom solution to better plot their process engineering data. Both parties had NDAs, but the customer was extra sensitive about their process data (rightfully so!). After much discussion, it was decided that the equities industry might be a good representation of the underlying process variation for machine learning algorithm development. Our software and analytics teams started analyzing data in Python and quickly converged on utilizing a custom neural network. Two weeks later, the team settled on a sequential multi-layer neural net with both deep layers and Long Short-Term Memory (LSTM) layers to predict process variation based on historic data; example below. The neural net model runs real-time on one of our Custom AI PCs and automatically retrains every 12 hours! Our software and analytics team still jokes around to this day that it would have been a bigger win if the prediction algorithm worked 2 days out!
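The customer's model details are confidential, but the data preparation step for any LSTM-style sequence model looks roughly the same: slice the historic series into fixed-length windows, each paired with the next value as the prediction target. A minimal sketch (the series values and window length are invented for illustration):

```python
def make_windows(series, window=3):
    """Slice a time series into (input window, next-value target) pairs,
    the shape of training data a sequence model such as an LSTM expects."""
    pairs = []
    for i in range(len(series) - window):
        pairs.append((series[i:i + window], series[i + window]))
    return pairs

# Hypothetical process measurements sampled over time.
series = [10.0, 10.2, 10.1, 10.4, 10.3, 10.6]
pairs = make_windows(series, window=3)
# Each pair feeds the net: a window of inputs -> the predicted next value.
print(pairs[0])  # ([10.0, 10.2, 10.1], 10.4)
```

The windowed pairs are then batched and fed to the network's fit routine; the 12-hour retrain mentioned above simply re-runs this windowing over the freshest data.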
By: Felicia Franks on 3/28/2021 8:19:45 AM
Setting the PC name is an amazingly simple yet important step of PC setup, both for home users and for professionals delivering custom artificial intelligence and machine learning systems like AI Manufacturing Solutions. We typically configure the PC name to a standard agreed upon with our customers, which depends on the number of machine learning PCs in the deployed architecture as well as the customer's requirements – but a PC name can really be set to anything the user would like.
Configuring the PC name is an extremely easy process:
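Whatever naming standard is used, it is worth confirming afterward that software actually sees the new value – especially in a multi-PC deployment where names drive identification. A quick cross-platform check from Python (both standard-library calls read the configured machine name):

```python
import platform
import socket

# Both calls report the configured machine name; they should agree.
print(platform.node())
print(socket.gethostname())
```

If the old name still appears, a reboot is usually needed before the change fully takes effect.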
By: Alan Wright on 3/1/2021 2:01:16 PM
AI Manufacturing Solutions is like other cutting-edge businesses in the data analytics, deep learning, and industrial IoT space in that we need to leverage tools that work effectively across a multi-PC architecture. At AI Manufacturing Solutions, it is typical for us to network into servers running 50+ deep learning GPUs or similar edge computing technology. 7-Zip is one of the tools we leverage frequently to compress artificial intelligence models and scale deployments across our industrial and manufacturing solutions. Please verify usage rights below or directly with 7-Zip.
Referencing the 7-Zip website:
7-Zip is free software with open source. Most of the code is under the GNU LGPL license. Some parts of the code are under the BSD 3-clause License. Also, there is unRAR license restriction for some parts of the code. Read 7-Zip License information.
The steps below are a quick overview of how we set up 7-Zip for our customers' manufacturing IoT space.
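For scripted deployments we often drive 7-Zip from a script rather than the GUI. A minimal Python sketch (the model directory is a placeholder; `a` is 7-Zip's standard "add to archive" command, and we assume the `7z` executable is on the PATH):

```python
import shutil
import subprocess

def build_7z_command(archive, target_dir):
    """Build the 7-Zip 'add to archive' command line ('a' = add)."""
    return ["7z", "a", archive, target_dir]

def compress_model(model_dir, archive="model.7z"):
    """Compress a trained-model directory with the 7-Zip CLI."""
    if shutil.which("7z") is None:
        raise RuntimeError("7-Zip CLI not found on PATH")
    subprocess.run(build_7z_command(archive, model_dir), check=True)
    return archive
```

Wrapping the CLI this way makes it easy to compress and ship models across a multi-PC architecture from one deployment script.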
7-Zip’s website has an excellent FAQ guide that can answer remaining questions in detail.
By: Renee Burrows on 2/19/2021 1:25:00 PM
We recently had a customer call in reference to a competitor’s system – they were suffering from slow performance that did not meet marketed specifications. They reached out to the AI Manufacturing Solutions troubleshooting lines as part of their premium service plan offering. Our technicians walked them through a detailed troubleshooting process until they found that the supplier had not updated drivers before shipment. We explained to the customer that the performance hit could easily be related to the missing driver updates, and rightfully so, the customer asked for proof, as they did not see any issues in Windows Device Manager. We decided to capture the screenshots below during our next build.
As you can see from the first figure, the overall PC ranking using PASSMARK software was meager at best, at the 64th percentile. The next image then shows the same PC hitting the 98th percentile simply after a Windows driver update! This is why we urge IT groups to work with our experts, or the machine learning experts internal to their companies, to find an optimal deployment of critical driver and hardware upgrades that positively impact performance.
By: Renee Burrows on 2/12/2021 1:10:00 PM
By: Alan Wright on 1/19/2021 2:30:00 PM
We believe that excellence in product delivery can only be achieved if each part of the process is standardized, optimized, and documented. To achieve this, each of our data analytics and machine learning systems is assembled following stringent internal procedures and proven quality control techniques.
The figure below includes an excerpt from one of our checklists used by our technicians in the initial assembly stages. Each part of the process has a similar checklist where we document critical details to ensure system quality and the traceability required for any future troubleshooting/upgrades. This checklist accomplishes a few things:
At AI Manufacturing Solutions, we ensure that quality is part of every process we perform, because without quality, the artificial intelligence algorithms our customers rely on are worth less.
By: Alan Wright on 1/1/2021 3:13:00 PM
We are excited to share that AI Manufacturing Solutions is entering the blogging world to help better position ourselves and our customers in the digital age. We will be leveraging our blog to share several interesting items:
We hope everyone finds our blog to be an educational reference to help better their industrial automation needs.
Thanks
AI Manufacturing Solutions Staff
By: Renee Burrows on 12/21/2020 1:15:00 PM
TensorFlow is an end-to-end open-source platform for machine learning. It has a comprehensive, flexible ecosystem of tools, libraries, and community resources that lets researchers push the state-of-the-art in ML and developers easily build and deploy ML-powered applications.
Visit tensorflow.org to see what’s new, or reach out to us via our contact page to understand how our team of software experts can help elevate your TensorFlow performance.