Available online at www.sciencedirect.com

ScienceDirect

Procedia Manufacturing 24 (2018) 120–127
www.elsevier.com/locate/procedia

4th International Conference on System-Integrated Intelligence

Enabling Data Analytics in Large Scale Manufacturing

Achim Kampker a, Heiner Heimes a, Ulrich Bührer a,*, Christoph Lienemann a, Stefan Krotil b

a Chair of Production Engineering of E-mobility Components (PEM) of RWTH Aachen University, Campus Boulevard 30, 52074 Aachen, Germany
b BMW Group, Knorrstraße 147, 80788 Munich, Germany

Abstract

Companies of the manufacturing industry face increasing process complexity. To remain competitive, increasing the knowledge concerning innovative manufacturing processes is necessary. In other areas, data analytics methods have been successfully applied for this purpose. Currently, their application in large scale manufacturing is hampered by insufficient data availability. Therefore, this study presents a solution approach that enables adaptive data availability by establishing a data-use-case-matrix (DUCM), which allows use case prioritization to support the dimensioning of control systems and IT infrastructures. To support technology development, a scalable implementation of the prioritized use cases starting in early prototyping phases is further proposed.

© 2018 The Authors. Published by Elsevier Ltd.
This is an open access article under the CC BY-NC-ND license (https://creativecommons.org/licenses/by-nc-nd/4.0/)
Selection and peer-review under responsibility of the scientific committee of the 4th International Conference on System-Integrated Intelligence.

Keywords: Automotive; Manufacturing; Data Analytics; Big Data; Optimization

* Corresponding author. Tel.: +49-151-601-18280; fax: +49-89-382-70-10021.
E-mail address:

2351-9789 © 2018 The Authors. Published by Elsevier Ltd.
10.1016/j.promfg.2018.06.017
1. Introduction

Due to ever-changing market conditions and regulations along with increasing competition, companies of the manufacturing industry are facing challenges to satisfy high customer demands as well as to cope with the increasing complexity of manufacturing processes [1–3]. In order to face these challenges and to produce modern components with differentiating potential, high manufacturing quality and efficiency are required [4]. The current industrial practice to ensure premium product quality as well as the safety of components, machines and products despite novel manufacturing processes is to conduct a large number of quality checks during manufacturing. These quality checks bind large amounts of plant investment and reduce manufacturing efficiency. Increasing knowledge about the manufacturing processes and their interdependencies will allow a reduction of the number of quality checks by predicting manufacturing quality utilizing big data. Therefore, knowledge generation for modern manufacturing technologies concerning manufacturing stability, efficiency and product quality needs to be supported and sped up in order to stay competitive.

Currently, finding optimization potential regarding production efficiency and quality requires documented data, a lot of experience and manual diagnostics. An emerging trend to uncover hidden relations is data analytics [5]. Collecting data, using Internet-of-Things approaches [6] and applying machine-learning or data-mining approaches [5, 7, 8] by following methods such as CRISP-DM [9] are typical steps in the process of uncovering hidden relations within the IT industry. In today's production environments, automated data acquisition is difficult due to heterogeneous databases, limited database access, missing tracing information and reduced information sets due to missing time series data as well as high costs for new IT infrastructure, cp. [10]. This hinders the application of data analytics methods.
As existing protocols on the control level, e.g. OLE for Process Control (OPC), Modbus TCP, Dynamic Data Exchange or MTConnect, need to be statically configured during commissioning, they must be manually adjusted to allow adaptive data acquisition, cp. [11]. OPC UA provides limited discovery functionality [11], but is not present in all existing factories. Therefore, due to the required manual effort and hardware, retrofitting existing manufacturing facilities to retroactively collect additional data is only applicable for small scale applications within the limitations of the existing IT infrastructure, thereby further hindering the adoption of data analytics.

To enable the adoption of data analytics within production, proper data quality regarding a specific use case as well as data availability is essential, cp. [2, 12] for an example of data analytics in order processing. Furthermore, due to the synergetic nature, uncertainty and scalability of data analytics projects, the current project prioritizations, based on the selection of separate technological developments, have to be adjusted to handle data analytics projects. As a result, data analytics approaches have to be considered during early technological development phases. Therefore, the presented approach focuses on three central ideas: i) adaptive data availability, ii) strategic prioritization as well as iii) scalable data analytics, which are detailed in the following.

2. State of the Art

Today's research on data analytics relates to a broad spectrum of topics, e.g. trends regarding Industrie 4.0, the Internet of Things as well as data mining. A quick overview of the topics most relevant for data analytics in manufacturing, which form the basis for this study, is given in the following.

2.1. Data Analytics for Uncovering Hidden Relations

Topics concerning data analytics often relate to the concept of the Internet of Things, wherein objects are interconnected, allowing their management and data mining of the created data [13]. The closely related Industrie 4.0 comprises the application of the generic concept of cyber-physical systems (CPS) [14, 15]. The fields of data mining, data analytics as well as machine learning (ML) [5] are necessary to extract knowledge from large amounts of data. Learning relations between production parameters and the process or the product is also a part of quality management [16]. CRISP-DM [9] is a well-established procedure model for data analytics projects. This procedure model comprises the stages of Business Understanding, Data Understanding, Data Preparation, Modeling, Evaluation and Deployment.

In order to enable data analytics, harvestable data is required [17]. The rise of IoT-capable machines aims at having increasingly more data available. Nevertheless, the IT infrastructures required to process big data are still subject to research. Therefore, deriving knowledge from big data is considered a great challenge [5, 6]. While integrating CPS,
handling data heterogeneity remains a current research topic [7, 8, 18]. These approaches focus on understanding data from different data sources and aim at uncovering unknown dependencies. Big data and data analytics applications often focus on autonomous driving, the optimization of traffic and the development of novel business strategies [19, 20]. Current research focuses on finding novel pricing and business models [21] as well as accelerating information exchange mechanisms within the industrial Internet of Things [22].

2.2. Data Analytics in Manufacturing

The main identified use case categories for big data and analytics in production are quality control, e.g. predictive maintenance and smart sensors [23, 24], as well as the reduction of test timing and calibration along with warranty costs and improving yield [25, 26]. Other use cases are human-machine interfaces (HMI), such as text and speech recognition or the learning of processes [27]. These use cases are typically considered separately. Current data analytics procedure models start with the business and data understanding phases [9]. Even though extensions to data analytics procedure models exist, such as stream analytics [28], adaptive data availability is often neglected or reduced to establishing a connection to the required data sources, disregarding limitations of the surrounding IT infrastructure, cp. [18]. Furthermore, acquiring all generated data is infeasible for financial and complexity reasons. Because of these limitations regarding data acquisition, data from production systems is still considered a novel resource for data analytics in manufacturing [17, 29].

Regarding strategic prioritization, an optimization problem has to be solved in order to evaluate a number of use cases according to their costs and benefits. Typical optimization problems in manufacturing are process optimizations [30–33]. Therein, the production process is optimized based on process step alternatives.
This is a highly constrained and combinatorial optimization problem, since process steps or hierarchies of process steps are interchanged. In data analytics processes, each implemented use case influences the costs of other possible use cases, making them more beneficial and thereby providing a larger solution space. These synergies arise from the reusability of the same data sources as well as toolchains and analytics approaches for different use cases. The benefits have to fit the given production process and may be expressed in different metrics, e.g. benefit per produced unit or profit per hour, cp. [3].

Scalable data analytics focuses on continuous analytics implementation during early phases of product and process development as well as on implementation in series production. Hence, prototype data has to be considered. Currently, research on knowledge generation in early prototype stages focuses on detecting implicit knowledge [34] and enabling improved teaching methods instead of a continuous integration of data analytics. Therefore, data analytics is typically applied retroactively, whenever a problem occurs.

2.3. Conclusion of the State of the Art

Integrating manufacturing data is expected to revolutionize the industry by means of data analytics, which enables the identification of hidden knowledge and provides intelligence throughout the engineering processes [25]. The procedure models required to perform analytics projects based on existing data as well as methods for processing large amounts of data already exist. However, since not all data is readily available in large scale manufacturing, and since it is not feasible to collect all data from production due to costs and limitations of the control and IT infrastructures, an approach that focuses on the challenges arising from the multidisciplinary nature of large scale digitalization projects in the manufacturing industry is missing.
In order to enable data analytics within manufacturing, data acquisition must be considered and, therefore, the necessary use cases have to be identified and prioritized proactively. Current project prioritizations have to be adjusted to handle the uncertainty and scalability of data analytics projects. This in turn enables decision making based on use case benefits, thereby guaranteeing the adaptive acquisition of beneficial data. To enable this adaptive data acquisition, the selected use cases and their data requirements have to be incorporated during the planning of the control infrastructures of the manufacturing plants. In order to reduce ramp-up times and to provide support during early development phases, data analytics approaches have to be applied continuously from early prototype production phases onwards. This allows early detection of actual use case benefits as well as more detailed use case requirements, e.g. reduced timing requirements for a specific use case.
3. Enabling Data Analytics in Manufacturing

In the following, a concept to enable data analytics from the early phases to series manufacturing is presented. It is structured along the central ideas 'Ensure Adaptive Data Availability', 'Strategic Prioritization' and 'Implement Scalable Data Analytics'. The learning loop is closed to allow evaluation of the approach and process optimization (cp. Fig. 1a).

3.1. Ensuring Adaptive Data Availability

The main goal of ensuring adaptive data availability is to identify which data is relevant to be made accessible. Mainly, this data comprises parameters that influence or describe the product or production process, in order to enable data analytics use cases. Furthermore, to allow the dimensioning of the IT infrastructure in the production environment, the necessary data quality has to be defined for each data point regarding its data type and size as well as use case specific requirements, such as the data recording frequency necessary to describe a given process. Additionally, the required bandwidth to transfer this data has to be established, taking into account data aggregation and calculations performed on a decentralized node, edge device or the programmable logic controller to reduce data traffic.

In order to obtain the relevant information, structured workshops with experts from all relevant disciplines are conducted. These experts include production and process planners, IT and control engineers, product developers and production technology experts as well as quality specialists. The workshop concept is based on methods stemming from quality management, e.g. Failure Mode and Effects Analysis (FMEA) [35] and Six Sigma [16]. The approach of these workshops is twofold. The first is a deductive approach, which utilizes the current state of the art and the company strategy to derive important use case categories and is performed by the data scientist team, relying on planning and strategic knowledge.
The second is an inductive approach utilizing expert knowledge, which employs existing experience with current diagnostic approaches, e.g. to aid in solving technological challenges during prototypical production stages. The experts are asked to prepare an overview of typical problems they are aware of and of the data sources they rely on. In a second step, workshops with each identified expert are conducted. Following a prepared interview guideline, information about possible data analytics use cases and possible interfaces to other experts is determined, as well as the required data and data quality. The selection of necessary sensor systems and other CPS is presented in [12].

This approach makes it possible to define relevant use cases and to link them to the data relevant for these use cases, while specifying the required data quality. This information is collected within the data-use-case-matrix (DUCM), cp. Fig. 1b. In order to structure the acquired information, the DUCM additionally comprises entries regarding variable names, data types and measurement units as well as the names of the considered production process, the responsible process experts and the machines that generate the data. For the purpose of allowing comparisons of the use cases, the requirements, e.g. timing requirements, required response times and data storage times, as well as the calculated benefits are included in the DUCM for each use case. This information is necessary to allow a prioritization of the use cases.

An example for this method is a predictive maintenance use case considering a leak test. First, the necessary variables are determined, e.g. system pressure and power consumption. For example, the sampling rate requirement might be 30 Hz in order to allow monitoring of the process. Furthermore, the results must be available within a defined time (e.g. 60 seconds), in order to avoid damages. Data must be stored for a defined time (e.g. three years) to provide enough data for continuous modeling of the process.
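To make the structure of a DUCM entry concrete, the leak-test example above can be sketched as a small data structure. This is only an illustrative sketch: the field names, the class layout and the benefit value are assumptions, while the 30 Hz sampling rate, 60-second response time and three-year retention come from the example in the text. The bandwidth calculation shows how such entries feed the dimensioning of the IT infrastructure.

```python
from dataclasses import dataclass

@dataclass
class DataPoint:
    """One data entry (row) of the data-use-case-matrix (DUCM)."""
    variable: str          # variable name
    data_type: str         # e.g. "float32"
    unit: str              # measurement unit
    process: str           # production process generating the value
    machine: str           # machine generating the data
    expert: str            # responsible process expert
    sample_rate_hz: float  # required recording frequency
    sample_bytes: int      # size of one recorded sample

    def bandwidth_bps(self) -> float:
        """Required transfer bandwidth in bits per second."""
        return self.sample_rate_hz * self.sample_bytes * 8

@dataclass
class UseCase:
    name: str
    data_points: list       # DataPoint entries this use case relies on
    response_time_s: float  # results must be available within this time
    storage_years: float    # retention requirement
    benefit: float          # estimated benefit score (illustrative)

# Leak-test predictive-maintenance entry; sample sizes are assumed.
pressure = DataPoint("system_pressure", "float32", "bar", "leak test",
                     "leak tester", "process expert A", 30.0, 4)
power = DataPoint("power_consumption", "float32", "W", "leak test",
                  "leak tester", "process expert A", 30.0, 4)
leak_test_pm = UseCase("predictive maintenance: leak test",
                       [pressure, power], response_time_s=60.0,
                       storage_years=3.0, benefit=8.5)

# 2 signals x 30 Hz x 4 B x 8 bit = 1920 bit/s before any aggregation
total_bps = sum(dp.bandwidth_bps() for dp in leak_test_pm.data_points)
print(total_bps)
```

The per-data-point bandwidth figure is exactly the quantity needed to dimension the transfer infrastructure described above; decentralized aggregation would reduce it further.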
Fig. 1. (a) Overview of the method for enabling data analytics in manufacturing; (b) ensuring adaptive data availability.
For the determination of the benefits, it is possible to use ranking systems, business case calculations and technological prechecks. The ranking criteria, in the provided example e.g. financial benefits or reduction of rework, are selected based on the company's strategy and goals. Using these criteria and combining them with expert knowledge, the benefits of each use case are determined. Collecting this information is necessary for the strategic prioritization. At this point it is not yet possible to calculate the required effort or costs, since the varying implementation scopes of the use cases as well as further expansion stages impede determining the required costs for each use case.

3.2. Strategic Prioritization

In order to allow a prioritization of the previously defined use cases, their potential regarding possible benefits has to be evaluated. For this purpose, it is necessary to determine the cost to implement each use case and its benefits, as presented in the previous section. The central ideas of this approach are depicted in Fig. 2a. A main differentiation from other effort and benefit evaluations is the possible scaling effects present within the data sources, which multiple use cases can share. Therefore, the use cases are not only clustered based on their benefits and required implementation effort, but also according to their overlapping required data points. Consequently, multiple use cases can share a number of data points, which in turn reduces the overall cost of implementing these use cases, making them more beneficial. Determining the costs for each use case depends on initial costs, e.g. for building the IT infrastructure and executing the data analysis, and also on running costs for data storage and maintaining the IT systems.
These comprise data storage costs, required data storage access speeds, costs for computation in the cloud and on site as well as the required hardware and software tools to extract data from the production machines. The latter are prone to fluctuation, since a high number of prioritized use cases might make additional computing nodes within the production network, additional Ethernet connectivity or even parallel bus systems necessary. Thus, it is necessary to determine the combination of use cases which provides the best benefit-to-cost ratio. The given combinatorial problem aims at selecting the optimal combination of k use cases from n total use cases, considering all subset sizes from one element up to n elements. Calculating the total number of possible combinations and applying the binomial theorem, this number simplifies to

\( A = \sum_{k=1}^{n} \binom{n}{k} 1^{n-k} 1^{k} = (1 + 1)^{n} - 1 = 2^{n} - 1. \)   (1)
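Equation (1) can be checked directly for small n by enumerating all non-empty use case subsets. The same enumeration also illustrates the scaling effect described above: a data source shared by several selected use cases is paid for only once, so combinations can be more beneficial than their parts. All names, benefit values and the flat cost per data source below are invented for illustration, not taken from the paper.

```python
from itertools import combinations

# Hypothetical use cases: name -> (benefit, required data sources).
use_cases = {
    "predictive maintenance": (9.0, {"pressure", "power"}),
    "EoL decision support":   (12.0, {"power", "test_log"}),
    "rework reduction":       (5.0, {"test_log"}),
}
COST_PER_SOURCE = 4.0  # flat cost to make one data source available

n = len(use_cases)
# All non-empty subsets of use cases; their count equals 2^n - 1, cp. Eq. (1).
subsets = [c for k in range(1, n + 1)
           for c in combinations(use_cases, k)]
assert len(subsets) == 2 ** n - 1

def net_value(names):
    """Total benefit minus cost; shared data sources are counted once."""
    benefit = sum(use_cases[u][0] for u in names)
    sources = set().union(*(use_cases[u][1] for u in names))
    return benefit - COST_PER_SOURCE * len(sources)

best = max(subsets, key=net_value)
print(sorted(best), net_value(best))
```

In this toy instance the full set wins because every source is reused; the brute-force search is exponential, which is exactly why the text points to optimization algorithms for large use case numbers.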
Consequently, large use case numbers necessitate the application of optimization algorithms, which will be the focus of future studies. This allows the prioritization of the optimal use case combination.

3.3. Scalable Data Analytics

The goal of scalable data analytics is to provide benefits during early phases of development, regarding causes and effects as well as diagnostic support, while allowing continuous integration into large scale series production. For this approach, the use cases prioritized in the last subsection are implemented as early as the prototype production stages. This approach is depicted in Fig. 2b. Following the prototypical implementation, a first pilot application is implemented in current series production, followed by a roll-out to more than one production facility.
Fig. 2. (a) Strategic Prioritization (b) Scalable Data Analytics
The two main factors to enable this kind of scalability are use case preparation and data infrastructure. In this context, use case preparation stands for implementing use cases in prototypical stages of production, even if they are not yet able to give statistically relevant results. This approach provides two benefits:

1. Depending on the use case, its results or its data may be applicable for diagnostic purposes, aiding in failure discovery as well as taking advantage of the available potentials in early prototyping stages.
2. Use case preparation in early phases leads to an early development and implementation of the data analytics toolchain. Therefore, scaling from pilot to series is accelerated, as the required interfaces and algorithms are already set up.

The factor data infrastructure focuses on the data processing chain, its interfaces and the modeling solutions. These are implemented according to series production standards during the prototyping phase. This allows an early implementation of the computation and storage parts of the use cases, such as data storage, algorithms, statistical models as well as visualizations.

3.4. Continuous Evaluation

In order to enable a continuous improvement process, the implementations of the use cases described within the DUCM are evaluated within each production phase. This is necessary because product properties are not yet fully specified, resulting in frequent changes to the product or production system during prototyping stages. Additionally, all knowledge created supports the development of the next iteration of a product. The main approach of the evaluation during each production phase is to determine the usefulness of each use case or data source, in order to detect whether new use cases or data sources need to be added or existing ones removed. For this purpose, the benefit of the implementation of each existing use case is compared to the expected benefit and adapted if necessary.
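This evaluation step can be sketched minimally as follows, assuming each DUCM entry stores an expected benefit that is compared against the benefit realized in the current production phase. The use case names, the numeric values and the 50% threshold are illustrative assumptions, not the paper's actual criteria.

```python
# Toy DUCM benefit records for one production phase (values invented).
ducm = {
    "leak test predictive maintenance": {"expected": 8.0, "realized": 9.5},
    "speech-based HMI logging":         {"expected": 6.0, "realized": 1.2},
}

def review(ducm, drop_ratio=0.5):
    """Adapt expected benefits and flag use cases far below expectation."""
    flagged = []
    for name, entry in ducm.items():
        if entry["realized"] < drop_ratio * entry["expected"]:
            flagged.append(name)  # candidate for removal from the DUCM
        # Update the expectation for the next evaluation iteration.
        entry["expected"] = entry["realized"]
    return flagged

flagged = review(ducm)
print(flagged)  # only the underperforming use case is flagged
```

Running this once per production phase yields the iterative refinement of the DUCM described above: expectations track observations, and persistently underperforming use cases surface for removal.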
Furthermore, new production challenges are introduced as use cases within the DUCM. Hence, this is an iterative process, which continuously optimizes the DUCM, thereby making it more precise.

4. Application and Evaluation

The application example is taken from the automotive industry, which is shifting towards electric mobility [36]. As a result of the production cost difference to traditional vehicles [37], the competition consists of competing companies as well as other in-house products. Since the manufacturing of high voltage batteries for electric vehicles is a cost intensive process, it is necessary to detect optimization potentials regarding production quality and efficiency. Hence, it serves as the application example. The manufacturing steps can be clustered into a receiving inspection of the delivered battery cells, module manufacturing and testing, high voltage battery (HVB) manufacturing as well as end-of-line (EoL) testing. This approach focuses on the production of the next generation HVB, while the results are also applied to the existing production.

4.1. Implementation of the Approach and Evaluation

In order to detect beneficial data analytics use cases, interviews and process FMEAs were conducted with the process experts regarding the manufacturing processes. In these workshops, numerous potential use cases were identified, which allowed the construction of the DUCM with a total of 264 use cases. The chosen requirements were bandwidth, response times and storage duration. In the present case, the anticipated combined benefit concerning financial examination, rework reduction as well as the use case's influence on scalability is combined within an overall benefit value. The definition of the requirements and the benefits of each use case was based on expert knowledge. The data costs were obtained from the combination of use case requirements and the data types as well as typical cost factors for IT projects, infrastructures and data storage.
To make the costs comparable, they were derived from the overall cost of implementing all use cases. This made it possible to calculate specific costs for each use case, while keeping scaling effects due to data reusability in mind. In order to allow a strategic prioritization, business cases for the use cases were calculated to get a clearer understanding of the costs and benefits involved. The production of batteries is characterized by high testing efforts to ensure product quality. Hence, the HVB EoL testing stood out during the use case prioritization, and the benefit of its use cases was very high in the business case calculations. The use case with the highest prioritization was decision support for more efficient and faster HVB EoL testing. This use case was selected for implementation.
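One way such decision support can shorten EoL testing is to reorder the test sequence so that, for a defective unit, a failing test is reached as early as possible. A simple heuristic sorts tests by failure probability per unit of test time. This is a sketch under invented test names, probabilities and durations, not the paper's actual learned model.

```python
# Hypothetical EoL tests with assumed per-test failure rates and durations.
tests = [
    ("insulation test", {"p_fail": 0.02, "duration_s": 40}),
    ("capacity test",   {"p_fail": 0.10, "duration_s": 300}),
    ("leak test",       {"p_fail": 0.05, "duration_s": 60}),
]

# Run tests with the highest failure-detection rate per second first,
# so negative results are obtained as early as possible.
ordered = sorted(tests,
                 key=lambda t: t[1]["p_fail"] / t[1]["duration_s"],
                 reverse=True)
print([name for name, _ in ordered])
```

With these numbers the long capacity test moves last despite its high failure rate, because per second of testing it reveals fewer defects than the short leak and insulation tests; a model learned from production data would supply the real rates.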
Prototype data was utilized in order to build the data analytics toolchain. This allowed the acquisition of the required data to build cause-and-effect relationship models based on decision trees. This analytics toolchain was then applied to production data. Using the built analytics toolchain, correlations between testing parameters and negative testing results were identified. With this approach, it was possible to identify test patterns with a high influence on negative test results. These findings aided in the decision making process to rearrange and optimize the testing order. Based on a current pilot implementation, this optimized testing sequence has proven able to provide faster testing results in negative test cases and to reduce the number of detected pseudo errors. Hence, it was possible to increase manufacturing output by reducing rework as well as testing times and efforts.

5. Conclusion and Outlook

In this paper, an approach to enabling data analytics in manufacturing is presented. Currently, the main obstacles for data analytics in electric automotive manufacturing are missing data and the limited accessibility of existing data due to the distribution of production databases. In order to enable data analytics, acquiring all possible data is not an option from a financial perspective. Retroactively collecting data is not an option either, due to the time lost until a statistically relevant amount of data is collected. Therefore, the presented approach focuses on obtaining adaptive data availability by proactively giving transparency to possible data analytics use cases for a specific production process. This allows a prioritization of the use cases in order to achieve a maximum benefit. Furthermore, this method allows the implementation of these use cases in early phases of prototype development, thereby supporting diagnostics, scaling processes and accelerating knowledge generation.
Another benefit of this approach is that the main parts of the data analytics environment are applicable throughout the production development process, up to large-scale series production. An application example from the automotive manufacturing domain gave a first glimpse of the method's potential benefits. Using the method for adaptive data availability, a data-use-case-matrix of the manufacturing process for high-voltage batteries was constructed, which allowed a strategic prioritization of the identified use cases and, hence, the selection of the decision-support use case for HVB EoL testing. By applying data analytics methods to a combination of prototype and series production data, a learning mechanism for optimized testing orders was implemented for HVB EoL testing.

Future research will focus on tapping the full potential of the identified central ideas to enable data analytics in manufacturing. Specifically, the concepts for adaptive data availability need to be extended to provide a larger knowledge base. Given the large number of use cases identified in large-scale manufacturing, strategic prioritization must reliably account for synergies between use cases and automatically provide recommendations based on additional conditions. Furthermore, work on the implementation of scalable data analytics will focus on building a comprehensive analytics framework that allows multiple use cases to be implemented for prototypes and pilots, as well as automated integration into series production.

References
[1] B. Vogel-Heuser, D. Schütz, T. Frank, C. Legat, Model-driven engineering of Manufacturing Automation Software Projects – A SysML-based approach, Mechatronics 24(7) (2014) 883–897.
[2] G. Schuh, M. Blum, Design of a data structure for the order processing as a basis for data analytics methods, 2016 Portland International Conference on Management of Engineering and Technology (PICMET), Honolulu, HI, USA, 2164–2169.
[3] M. Hammer, K. Somers, H. Karre, C. Ramsauer, Profit Per Hour as a Target Process Control Parameter for Manufacturing Systems Enabled by Big Data Analytics and Industry 4.0 Infrastructure, Procedia CIRP 63 (2017) 715–720.
[4] F. Ju, J. Li, G. Xiao, N. Huang, S. Biller, A Quality Flow Model in Battery Manufacturing Systems for Electric Vehicles, IEEE Trans. Automat. Sci. Eng. 11(1) (2014) 230–244.
[5] A. Luckow, K. Kennedy, F. Manhardt, E. Djerekarov, B. Vorster, A. Apon, Automotive big data: Applications, workloads and infrastructures, IEEE International Conference on Big Data (2015).
[6] V. Uraikul, C. W. Chan, P. Tontiwachwuthikul, Artificial intelligence for monitoring and supervisory control of process systems, Engineering Applications of Artificial Intelligence 20(2) (2007) 115–131.
[7] O. Niggemann, G. Biswas, J. Kinnebrew, H. Khorasgani, S. Volgmann, A. Bunte, Datenanalyse in der intelligenten Fabrik, Handbuch Industrie 4.0 2 (2017) 471–490.
[8] L.-A. Tang, J. Han, G. Jiang, Mining sensor data in cyber-physical systems, Tsinghua Science and Technology 19(3) (2014) 225–234.
[9] C. Shearer, The CRISP-DM Model: The New Blueprint for Data Mining, Journal of Data Warehousing 5(4) (2000).
[10] T. H.-J. Uhlemann, C. Lehmann, R. Steinhilper, The Digital Twin: Realizing the Cyber-Physical Production System for Industry 4.0, Procedia CIRP 61 (2017) 335–340.
[11] J. Schlechtendahl, M. Keinert, F. Kretschmer, A. Lechler, A. Verl, Making existing production systems Industry 4.0-ready, Prod. Eng. Res. Devel. 9(1) (2015) 143–148.
[12] G. Schuh, C. Maasem, M. Birkmeier, Systematization models for taylor-made sensor system applications and sensor data fit in production, Smart SysTech 2015 European Conference on Smart Objects, Systems, and Technologies 259 (2015).
[13] B. Dorsemaine, J.-P. Gaulier, J.-P. Wary, N. Kheir, P. Urien, Internet of Things: A Definition & Taxonomy, 9th International Conference on Next Generation Mobile Applications, Services and Technologies (NGMAST) (2015) 72–77.
[14] R. Drath, A. Horch, Industrie 4.0: Hit or Hype, IEEE Industrial Electronics Magazine 8(2) (2014) 56–58.
[15] L. Monostori, B. Kádár, T. Bauernhansl, S. Kondoh, S. Kumara, G. Reinhart, O. Sauer, G. Schuh, W. Sihn, K. Ueda, Cyber-physical systems in manufacturing, CIRP Annals – Manufacturing Technology 65(2) (2016) 621–641.
[16] N. Ranjan Senapati, Six Sigma: Myths and realities, International Journal of Quality & Reliability Management 21(6) (2004) 683–690.
[17] J. S. K. Tan, A. K. Ang, L. Lu, S. W. Q. Gan, M. G. Corral, Quality Analytics in a Big Data Supply Chain Commodity Data Analytics for Quality Engineering, IEEE Region 10 Conference (TENCON) (2016) 3455–3463.
[18] V. Jirkovsky, M. Obitko, V. Marik, Understanding Data Heterogeneity in the Context of Cyber-Physical Systems Integration, IEEE Trans. Ind. Inf. 13(2) (2017) 660–667.
[19] R. Brunauer, Big Data in der Mobilität, Big Data (2016) 235–267.
[20] R. Dewenter, H. Lüth, Big Data aus wettbewerblicher Sicht, Wirtschaftsdienst 96(9) (2016) 648–654.
[21] J. Weinman, The Economics and Strategy of Manufacturing and the Cloud, IEEE Cloud Computing 3(4) (2016) 6–11.
[22] J. Wan, S. Tang, Z. Shu, L. Di, S. Wang, M. Imran, A. Vasilakos, Software-Defined Industrial Internet of Things in the Context of Industry 4.0, IEEE Sensors Journal 16(20) (2016) 7373–7380.
[23] R. Bai V, A. C A, J. M. Oommen, J. Babu, T. Paul, V. Sankar, Predictive analysis for industrial maintenance automation and optimization using a smart sensor network, International Conference on Next Generation Intelligent Systems (ICNGIS) (2016) 1–5.
[24] G. A. Susto, A. Schirru, S. Pampuri, S. McLoone, A. Beghi, Machine Learning for Predictive Maintenance: A Multiple Classifier Approach, IEEE Transactions on Industrial Informatics 11(3) (2015) 812–820.
[25] P. Lade, R. Ghosh, S. Srinivasan, Manufacturing Analytics and Industrial Internet of Things, IEEE Int. Systems 32(3) (2017) 74–79.
[26] D. Park, The Quest for the Quality of Things: Can the Internet of Things deliver a promise of the quality of things?, IEEE Consumer Electron. Mag. 5(2) (2016) 35–37.
[27] A. Diedrich, A. Bunte, A. Maier, O. Niggemann, Kognitive Architektur zum Konzeptlernen in technischen Systemen (2015).
[28] P. Kalgotra, R. Sharda, Progression analysis of signals: Extending CRISP-DM to stream analytics, IEEE International Conference on Big Data (2016) 2880–2885.
[29] A. Gadatsch, Big Data – Datenanalyse als Eintrittskarte in die Zukunft, Big Data für Entscheider (2017) 1–10.
[30] D. Biermann, J. Gausemeier, S. Hess, M. Petersen, T. Wagner, Planning and optimisation of manufacturing process chains for functionally graded components – part 1: Methodological foundations, Prod. Eng. Res. Devel. 7(6) (2013) 657–664.
[31] M. Dannenberg, A. Georgiadis, B. A. Behrens, Model Based Optimization of Forging Process Chains under the Consideration of Penalty Functions, Advanced Materials Research 1018 (2014) 533–538.
[32] B. Denkena, B.-A. Behrens, F. Charlin, M. Dannenberg, Integrative process chain optimization using a Genetic Algorithm, Prod. Eng. Res. Devel. 6(1) (2012) 29–37.
[33] T. Wagner, D. Biermann, A Framework for Multi-level Modeling and Optimization of Modular Hierarchical Systems, Procedia CIRP 41 (2016) 159–164.
[34] J. A. B. Erichsen, A. L. Pedersen, M. Steinert, T. Welo, Using prototypes to leverage knowledge in product development: Examples from the automotive industry, Annual IEEE Systems Conference (2016) 1–6.
[35] L. S. Lipol, J. Haq, Risk analysis method: FMEA/FMECA in the organizations, International Journal of Basic & Applied Sciences 11(5) (2011).
[36] M. Brenna, F. Foiadelli, M. Longo, D. Zaninelli, e-Mobility Forecast for the Transnational e-Corridor Planning, IEEE Trans. Intell. Transport. Syst. 17(3) (2016) 680–689.
[37] A. Kampker, Elektromobilproduktion, Springer Berlin Heidelberg (2014).