Solving the Job-Shop Scheduling Problem with Reinforcement Learning
dc.accessRights | Anonymous | * |
dc.audience | Praxis | en_US |
dc.contributor.author | Schlebusch, David | |
dc.contributor.editor | Siegenthaler, Roger | |
dc.contributor.mentor | Waldburger, Raoul | |
dc.contributor.partner | Innosuisse | en_US |
dc.date.accessioned | 2020-10-16T06:15:14Z | |
dc.date.available | 2020-10-16T06:15:14Z | |
dc.date.issued | 2020-09-01 | |
dc.description.abstract | This study surveys the research on solving the job-shop scheduling problem with linear optimization and reinforcement learning methods. It traces a timeline of the problem and how the methods used to solve it have changed over time. The review aims to give an understanding of the problem and to explore possible solutions. To this end, an extensive search was conducted on Scopus, a research paper database. Twenty-seven promising papers were selected, rated, and categorized to build a sound understanding of the problem and to identify further research fields. Two such fields were elaborated: firstly, little research has been done on how reinforcement learning can be improved by incorporating data or process mining strategies to increase accuracy; secondly, no research was found that connects reinforcement learning with a takt schedule. The gathered papers give an extensive overview of the problem and demonstrate a multitude of solutions to the job-shop scheduling problem, which are discussed in detail in the results of this report. | en_US |
dc.identifier.uri | https://irf.fhnw.ch/handle/11654/31680 | |
dc.identifier.uri | https://doi.org/10.26041/fhnw-3437 | |
dc.language.iso | en | en_US |
dc.rights | Attribution-NonCommercial-NoDerivs 3.0 United States | * |
dc.rights.uri | http://creativecommons.org/licenses/by-nc-nd/3.0/us/ | en_US |
dc.subject | Job-Shop Scheduling Problems | en_US |
dc.subject | JSSP | en_US |
dc.subject | Reinforcement Learning | en_US |
dc.subject | takt | en_US |
dc.subject | production planning & scheduling | en_US |
dc.subject | PPS | en_US |
dc.title | Solving the Job-Shop Scheduling Problem with Reinforcement Learning | en_US |
dc.type | 11 - Studentische Arbeit | * |
dspace.entity.type | Publication | |
fhnw.InventedHere | Yes | en_US |
fhnw.IsStudentsWork | yes | en_US |
fhnw.PublishedSwitzerland | No | en_US |
fhnw.ReviewType | No peer review | en_US |
fhnw.StudentsWorkType | Master | en_US |
fhnw.affiliation.hochschule | Hochschule für Technik | de_CH |
fhnw.affiliation.institut | Institut für Business Engineering | de_CH |
fhnw.initialPosition | This literature review addresses a question raised by the novel solution to the job-shop scheduling problem (JSSP) with a takt, proposed by Waldburger as a smart scheduling recommender system (SRS). The SRS aims to reduce the makespan of jobs by introducing a takt so that each step of a job is completed within one shift (time unit), the next step in the following shift, and so on. This should fix the makespan of a job to exactly its number of steps, measured in shifts, which simplifies planning and keeps the shop-floor footprint low, since no large temporary stores should be needed (a minimal illustration of this idea is sketched below, after the metadata listing). Furthermore, it should guarantee on-time delivery, since the makespan per product is now fixed to a certain number of shifts. This approach prompts the question of how job-shop scheduling is currently solved and what research has been done previously, in particular with a focus on a production takt. | en_US |
fhnw.lead | This literature review surveys the research on solving the job-shop scheduling problem with linear optimization and reinforcement learning methods. It traces a timeline of the problem and how the methods used to solve it have changed over time. The review aims to give an understanding of the problem and to explore possible solutions. To this end, an extensive search was conducted on Scopus, a research paper database. Twenty-seven promising papers were selected, rated, and categorized to facilitate a quick understanding of the problem and to reveal potential research gaps. Two such gaps were found: firstly, little research has been done on how reinforcement learning can be improved by incorporating data or process mining strategies to increase accuracy; secondly, no research was found that connects reinforcement learning with a takt schedule, like the one proposed by the SRS project. The gathered papers give an extensive overview of the problem and demonstrate a multitude of solutions to the job-shop scheduling problem, which are discussed in detail in the results of this report. This should provide all the information necessary to implement one’s own reinforcement learning approach to the job-shop scheduling problem. | en_US |
fhnw.procedure | The literature search yielded a large number of papers attempting to solve the JSSP in various ways. The identified research indicates that the future of solving the JSSP lies with self-learning agents such as deep Q-networks (DQNs); a simplified dispatching sketch along these lines is given below, after the metadata listing. The need for RL was already discussed in 1994 [27], and as of today a complete guide on how to design, implement, and validate a DQN model for the JSSP is spread across several papers [33]. What remains open is how deep Q-learning can be applied to other job-shop management styles, in particular a takt schedule, for which no research was found (see the results shown in Table 1). | en_US |
fhnw.publicationState | Unpublished | en_US |
fhnw.results | The JSSP is a highly researched topic, with many improvements made over the years. Methods have evolved from simple branch-and-bound algorithms to heuristics such as GA and PSO, with the current state of research focusing strongly on RL, in particular DQNs. Finding a niche in which to further improve and extend existing methods is difficult. The literature results show that so far no one has touched the subject of RL in a takt production environment, like the one proposed by the SRS project, and this should be explored further. Of the solutions other researchers have proposed, applying a DQN to the SRS project appears the most reasonable. | en_US |
relation.isAuthorOfPublication | 87aa3927-3516-4dbc-993e-9844f977042b | |
relation.isAuthorOfPublication.latestForDiscovery | 87aa3927-3516-4dbc-993e-9844f977042b | |
relation.isEditorOfPublication | 05c09e6c-1338-417c-9dd5-9089f70d15fe | |
relation.isEditorOfPublication.latestForDiscovery | 05c09e6c-1338-417c-9dd5-9089f70d15fe | |
relation.isMentorOfPublication | 27ad9db5-9c7a-4864-8084-c2a71e3d635d | |
relation.isMentorOfPublication.latestForDiscovery | 27ad9db5-9c7a-4864-8084-c2a71e3d635d |
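
As an illustration of the takt principle described under fhnw.initialPosition, the following minimal Python sketch shows how fixing one job step per shift makes a job's makespan equal to its number of steps, measured in shifts. The job data and the shift length are hypothetical examples for illustration only; they are not taken from the report or the SRS project.

# Minimal sketch of takt-based scheduling: each job advances exactly one
# step per shift, so its makespan in shifts equals its number of steps.
# All numbers below are made-up examples.

SHIFT_HOURS = 8  # hypothetical shift length (one takt = one shift)

# Hypothetical jobs: each entry lists the processing steps of one job.
jobs = {
    "job_A": ["cut", "drill", "paint", "assemble"],  # 4 steps
    "job_B": ["cast", "mill", "inspect"],            # 3 steps
}

def takt_makespan_in_shifts(steps):
    """Under a takt, one step is finished per shift, so the makespan
    in shifts is simply the number of steps of the job."""
    return len(steps)

for name, steps in jobs.items():
    shifts = takt_makespan_in_shifts(steps)
    print(f"{name}: {shifts} steps -> makespan of {shifts} shifts "
          f"({shifts * SHIFT_HOURS} hours)")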
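
As a purely illustrative companion to the fhnw.procedure and fhnw.results fields, the sketch below shows the basic reinforcement-learning loop for dispatching operations in a toy JSSP instance. It uses simple tabular Q-learning rather than the DQN models discussed in the surveyed papers, and the two-job, two-machine instance is invented for this example; it is not taken from the report or the SRS project. A DQN would replace the Q table with a neural network and a richer state encoding, but the interaction loop stays the same.

import random
from collections import defaultdict

# Hypothetical 2-job x 2-machine JSSP: each job is an ordered list of
# (machine, duration) operations.
JOBS = [
    [(0, 3), (1, 2)],  # job 0: machine 0 for 3, then machine 1 for 2
    [(1, 2), (0, 4)],  # job 1: machine 1 for 2, then machine 0 for 4
]
N_JOBS, N_MACHINES = len(JOBS), 2

def reset():
    # state = (next operation index per job, machine free times, job ready times)
    return ((0,) * N_JOBS, (0,) * N_MACHINES, (0,) * N_JOBS)

def legal_actions(state):
    ops, _, _ = state
    return [j for j in range(N_JOBS) if ops[j] < len(JOBS[j])]

def step(state, job):
    # Dispatch the chosen job's next operation as early as possible.
    ops, mfree, jready = state
    machine, dur = JOBS[job][ops[job]]
    end = max(mfree[machine], jready[job]) + dur
    ops = tuple(o + 1 if j == job else o for j, o in enumerate(ops))
    mfree = tuple(end if m == machine else t for m, t in enumerate(mfree))
    jready = tuple(end if j == job else t for j, t in enumerate(jready))
    done = all(ops[j] == len(JOBS[j]) for j in range(N_JOBS))
    return (ops, mfree, jready), done, max(mfree)

Q = defaultdict(float)             # Q[(state, action)] -> value
alpha, gamma, eps = 0.1, 1.0, 0.2  # learning rate, discount, exploration rate

for episode in range(3000):
    state, done, transitions = reset(), False, []
    while not done:
        acts = legal_actions(state)
        if random.random() < eps:
            action = random.choice(acts)
        else:
            action = max(acts, key=lambda a: Q[(state, a)])
        nxt, done, makespan = step(state, action)
        transitions.append((state, action, nxt, done))
        state = nxt
    # Only the final makespan matters here: terminal reward is -makespan.
    for s, a, nxt, terminal in reversed(transitions):
        target = -makespan if terminal else gamma * max(
            Q[(nxt, b)] for b in legal_actions(nxt))
        Q[(s, a)] += alpha * (target - Q[(s, a)])

# Greedy rollout with the learned values.
state, done, order = reset(), False, []
while not done:
    action = max(legal_actions(state), key=lambda a: Q[(state, a)])
    order.append(action)
    state, done, makespan = step(state, action)
print("learned dispatch order:", order, "-> makespan:", makespan)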
Files
Original bundle
- Name: P7b-Schlebusch_David-Solving_the_Job-Shop_Scheduling_Problem_with_Reinforcement_Learning.pdf
- Size: 990.95 KB
- Format: Adobe Portable Document Format