Knowledge and expertise are the core values of CIMSOLUTIONS. Focused on the quality of its consultancy services, CIMSOLUTIONS continuously invests in the development of its own employees in the form of thematic growth paths, Special Interest Groups and Competence Centers. The latter focus on two aspects: building internal knowledge and expertise while developing viable and demonstrable products for external customers; both are necessary ingredients for asset-based consulting [1]. In the field of data and artificial intelligence (AI), the ‘Competence Center AI, Machine Learning and Robotics’ focuses on innovative developments in data-driven applications and the use of artificial intelligence; this includes a number of in-house projects, one of which can be accessed via the green button at the bottom right of this page.
Developing AI systems differs somewhat from traditional software development. In traditional programming, explicit rules are set down in the code. These fixed rules, together with new input data, ensure that the system always produces an unambiguous output. Letting AI learn by itself requires a different approach: instead of the fixed rules (which we often do not even know ourselves), we supply a number of known answers so that the machine can formulate the rules itself from these answers and the related input. Once the AI has determined the rules, the system can determine an answer for new input on its own.
Figure 1 – Functional difference between traditional programming and AI system
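The functional difference can be made concrete with a deliberately tiny sketch. The age-threshold task, the function names and the training data below are all hypothetical, chosen only to contrast a rule written by a programmer with a rule the machine derives from known answers:

```python
# Traditional programming: the rule is written explicitly in the code.
def is_adult_traditional(age):
    return age >= 18  # fixed rule, set down by the programmer

# Machine learning: the rule (here, a threshold) is derived from known answers.
def learn_threshold(examples):
    """Return the smallest input that was labeled True in the training data."""
    positives = [age for age, label in examples if label]
    return min(positives)

# Known answers (input, label) from which the machine formulates the rule itself.
training_data = [(12, False), (16, False), (18, True), (25, True), (40, True)]
threshold = learn_threshold(training_data)

def is_adult_learned(age):
    return age >= threshold

print(is_adult_learned(17), is_adult_learned(21))  # False True
```

Once the rule has been learned, the system handles new inputs on its own, exactly as described above; real machine learning replaces this single threshold with many parameters fitted to far more data.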
AI is a broad concept and can be divided into two parts (Figure 2):

- machine learning, in which a system derives its own rules from example data; and
- deep learning, a subset of machine learning that uses multi-layered (deep) neural networks.
The latter is becoming increasingly popular today with growing amounts of data and computing power, as it can solve much more complex problems such as image, speech or text recognition. This trend is also reflected in the market: a Gartner search returns 7,777 results for the term “deep learning”, far more than for “machine learning” (5,639) and “AI” (4,457) [3].
Figure 2 – the scope of AI / Machine Learning / Deep Learning [2]
Like traditional software, a neural network works with numerical input (numbers). For this reason, such a system cannot learn directly from text, audio or video input; this information must first be converted into numbers. How the input is converted depends on the type of problem.
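As one illustration of such a conversion, here is a minimal bag-of-words sketch for text input. The corpus and function names are made up for this example; real systems typically use tokenizers and learned embeddings rather than raw word counts:

```python
def build_vocabulary(sentences):
    """Assign every distinct word an index (a toy vocabulary)."""
    vocab = {}
    for sentence in sentences:
        for word in sentence.lower().split():
            vocab.setdefault(word, len(vocab))
    return vocab

def to_vector(sentence, vocab):
    """Bag-of-words: count how often each vocabulary word occurs."""
    vector = [0] * len(vocab)
    for word in sentence.lower().split():
        if word in vocab:
            vector[vocab[word]] += 1
    return vector

corpus = ["the model learns rules", "the rules are learned"]
vocab = build_vocabulary(corpus)
print(to_vector("the rules", vocab))  # [1, 0, 0, 1, 0, 0]
```

The resulting vectors of numbers are something a neural network can consume; images and audio get analogous treatments (pixel values, spectrograms) suited to their structure.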
Training deep learning models on huge datasets can be very computationally intensive, requiring multiple learning iterations that can take many days even on powerful Graphics Processing Units (GPUs). It is therefore often a good idea to first look for existing models that have been trained on similar data and reuse them, tweaking their parameters so that only a few additional iterations are needed to train the model on the specific data.
Figure 3 – Concept of transfer learning [4]
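The idea behind transfer learning can be sketched in a few lines. Everything here is a toy stand-in: `pretrained_features` plays the role of frozen layers trained earlier on similar data, and only a small task-specific "head" is trained, which is why few iterations suffice:

```python
# A pretrained "feature extractor" whose parameters stay frozen during fine-tuning.
def pretrained_features(x):
    return [x, x * x]  # stands in for layers trained earlier on similar data

# Only the small task-specific head is trained on the new data.
def train_head(data, lr=0.01, iterations=50):
    weights = [0.0, 0.0]
    for _ in range(iterations):
        for x, target in data:
            feats = pretrained_features(x)
            pred = sum(w * f for w, f in zip(weights, feats))
            error = pred - target
            # Gradient-descent step on the head weights only.
            weights = [w - lr * error * f for w, f in zip(weights, feats)]
    return weights

data = [(1.0, 1.0), (2.0, 4.0), (3.0, 9.0)]  # toy task: target is x squared
weights = train_head(data)
```

In real frameworks the same effect is achieved by freezing the pretrained layers and optimizing only the newly attached ones, so most of the expensive training from Figure 3 is inherited rather than repeated.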
The main goal of training AI systems is ultimately to deploy them into production so that they can make predictions based on new data. Several additional components, such as data management, model serving and monitoring, are required for the predictive system to work robustly. In this context, a discipline called Machine Learning Operations (MLOps, or DevOps for Machine Learning [5]) has recently emerged that focuses specifically on making every part of the end-to-end lifecycle of predictive model development robust, reproducible and manageable.
Figure 4 – Iterative-incremental process in MLOps [5]
Following these principles ensures stable predictions and timely signaling when the model needs to be retrained (for example, if input data or business processes change).
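Such a retraining signal can be sketched very simply. The check below is a hypothetical, deliberately crude one (comparing the mean of incoming inputs to the mean seen during training); production monitoring uses richer statistical drift tests, but the triggering principle is the same:

```python
def mean(values):
    return sum(values) / len(values)

def needs_retraining(training_inputs, new_inputs, tolerance=0.25):
    """Signal retraining when incoming data drifts too far from the
    distribution seen during training (here: a simple mean comparison)."""
    baseline = mean(training_inputs)
    drift = abs(mean(new_inputs) - baseline)
    return drift > tolerance * abs(baseline)

training_inputs = [10.0, 11.0, 9.5, 10.5]
print(needs_retraining(training_inputs, [10.2, 9.8, 10.6]))   # stable inputs: False
print(needs_retraining(training_inputs, [15.0, 16.5, 14.8]))  # drifted inputs: True
```

When the monitor fires, the MLOps pipeline of Figure 4 loops back to data preparation and retraining, keeping the deployed model's predictions stable over time.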
Klim Mikhailov
Machine Learning Engineer
[1] https://www.gartner.com/en/documents/3990228
[3] Gartner search results overview (up to the publication date of this blog post)