Solving the Job-Shop Scheduling Problem with Reinforcement Learning

dc.accessRights: Anonymous
dc.audience: Praxis (en_US)
dc.contributor.author: Schlebusch, David
dc.contributor.editor: Siegenthaler, Roger
dc.contributor.mentor: Waldburger, Raoul
dc.contributor.partner: Innosuisse (en_US)
dc.date.accessioned: 2020-10-16T06:15:14Z
dc.date.available: 2020-10-16T06:15:14Z
dc.date.issued: 2020-09-01
dc.description.abstract: This study explores the research done into solving the job-shop scheduling problem with linear optimization and reinforcement learning methods. It traces the timeline of the problem and how the methods used to solve it have changed over time. The research is intended to build an understanding of the problem and explore possible solutions. To that end, an extensive search for papers was conducted on Scopus, a research paper database. 27 promising papers were selected, rated, and categorized to facilitate a sound understanding of the problem and define further research fields. Two such research fields were elaborated further. First, little research has been done on how reinforcement learning can be improved by incorporating data or process mining strategies to further increase accuracy. Second, no research was found that connects reinforcement learning with a takt schedule. The gathered papers give an extensive overview of the problem and demonstrate a multitude of solutions to the job-shop scheduling problem, which are discussed in detail in the results of this report. (en_US)
dc.identifier.uri: https://irf.fhnw.ch/handle/11654/31680
dc.identifier.uri: https://doi.org/10.26041/fhnw-3437
dc.language.iso: en (en_US)
dc.rights: Attribution-NonCommercial-NoDerivs 3.0 United States
dc.rights.uri: http://creativecommons.org/licenses/by-nc-nd/3.0/us/ (en_US)
dc.subject: Job-Shop Scheduling Problems (en_US)
dc.subject: JSSP (en_US)
dc.subject: Reinforcement Learning (en_US)
dc.subject: takt (en_US)
dc.subject: production planning & scheduling (en_US)
dc.subject: PPS (en_US)
dc.title: Solving the Job-Shop Scheduling Problem with Reinforcement Learning (en_US)
dc.type: 11 - Studentische Arbeit
dspace.entity.type: Publication
fhnw.InventedHere: Yes (en_US)
fhnw.IsStudentsWork: yes (en_US)
fhnw.PublishedSwitzerland: No (en_US)
fhnw.ReviewType: No peer review (en_US)
fhnw.StudentsWorkType: Master (en_US)
fhnw.affiliation.hochschule: Hochschule für Technik und Umwelt FHNW (de_CH)
fhnw.affiliation.institut: Institut für Business Engineering (de_CH)
fhnw.initialPosition: This literature research is done to answer a question raised by the novel solution to the job-shop scheduling problem (JSSP) with a takt, proposed by Waldburger as a smart scheduling recommender system (SRS). The SRS aims to reduce the makespan of jobs by introducing a takt, so that each step of a job is completed in one shift (time unit) and the next step in the following shift, and so on. This reduces the makespan of a job to exactly its number of steps in shifts, which simplifies planning and keeps the shop-floor footprint low, since no large temporary stores should be needed. Furthermore, it should also guarantee on-time delivery, since the makespan per product is now fixed to a certain number of shifts. This approach prompts the question of how job-shop scheduling is solved at the moment and what research has been done previously, in particular with a focus on a production takt. (en_US)
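As a rough illustration of the takt idea described above, the following minimal sketch (job names and step lists are hypothetical and not taken from the thesis or the SRS project) shows that, when every step fits into one shift, a job's makespan measured in shifts equals its number of steps:

```python
# Minimal sketch with hypothetical job data: under an ideal takt, every step of
# a job occupies exactly one shift, so the makespan in shifts is the step count.

def takt_makespan_in_shifts(steps):
    """Makespan, in shifts, of a job whose every step fits into one shift."""
    return len(steps)

jobs = {
    "job_A": ["cut", "mill", "assemble"],          # 3 steps -> finished after 3 shifts
    "job_B": ["cast", "drill", "paint", "pack"],   # 4 steps -> finished after 4 shifts
}

for name, steps in jobs.items():
    print(f"{name}: {takt_makespan_in_shifts(steps)} shifts")
```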
fhnw.lead: This literature research explores the research done into solving the job-shop scheduling problem with linear optimization and reinforcement learning methods. It traces the timeline of the problem and how the methods used to solve it have changed over time. The research is intended to build an understanding of the problem and explore possible solutions. To that end, an extensive search for papers was conducted on Scopus, a research paper database. 27 promising papers were selected, rated, and categorized to facilitate a quick understanding of the problem and reveal potential research gaps. Two such gaps were found. First, little research has been done on how reinforcement learning can be improved by incorporating data or process mining strategies to further increase accuracy. Second, no research was found connecting reinforcement learning with a takt schedule such as the one proposed by the SRS project. The gathered papers give an extensive overview of the problem and demonstrate a multitude of solutions to the job-shop scheduling problem, which are discussed in detail in the results of this report. This should provide all the information necessary to implement one's own version of reinforcement learning for the job-shop scheduling problem. (en_US)
fhnw.procedure: The literature research yields a large number of papers trying to solve the JSSP in various ways. The research found indicates that the future of solving the JSSP is heading in the direction of self-learning agents such as DQNs. The need for RL was already discussed in 1994 [27], and as of today a complete guide on how to design, implement, and validate a DQN model for the JSSP is spread across several papers [33]. What remains open is the question of how deep Q-learning can be applied to other job-shop management styles, in particular a takt-based one, for which no research was found, as shown in the results in Table 1. (en_US)
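To make the Q-learning idea behind these DQN approaches concrete, here is a minimal, self-contained sketch that substitutes a tabular Q-function for the deep network and runs on a tiny, hypothetical two-job, two-machine instance. The instance data, state encoding, and reward shaping are illustrative assumptions only and do not reproduce any of the cited papers:

```python
import random
from collections import defaultdict

# Hypothetical instance: each job is an ordered list of (machine, processing_time) operations.
JOBS = [
    [(0, 3), (1, 2)],
    [(1, 4), (0, 1)],
]
N_MACHINES = 2

def initial_state():
    # (next operation index per job, free time per machine, free time per job)
    return ((0,) * len(JOBS), (0,) * N_MACHINES, (0,) * len(JOBS))

def step(state, job):
    """Schedule the next operation of `job`; return (next_state, makespan_so_far)."""
    next_ops, machine_free, job_free = state
    machine, duration = JOBS[job][next_ops[job]]
    start = max(machine_free[machine], job_free[job])
    finish = start + duration
    next_ops = tuple(n + 1 if j == job else n for j, n in enumerate(next_ops))
    machine_free = tuple(finish if m == machine else t for m, t in enumerate(machine_free))
    job_free = tuple(finish if j == job else t for j, t in enumerate(job_free))
    return (next_ops, machine_free, job_free), max(machine_free + job_free)

def open_jobs(state):
    # Jobs that still have unscheduled operations (the available actions).
    return [j for j, n in enumerate(state[0]) if n < len(JOBS[j])]

Q = defaultdict(float)                     # Q[(state, action)] -> estimated value
alpha, gamma, epsilon = 0.1, 1.0, 0.2      # learning rate, discount, exploration rate

for _ in range(2000):
    state, makespan = initial_state(), 0
    while open_jobs(state):
        actions = open_jobs(state)
        if random.random() < epsilon:
            action = random.choice(actions)
        else:
            action = max(actions, key=lambda a: Q[(state, a)])
        nxt, new_makespan = step(state, action)
        reward = -(new_makespan - makespan)            # penalise makespan growth
        best_next = max((Q[(nxt, a)] for a in open_jobs(nxt)), default=0.0)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state, makespan = nxt, new_makespan

# Greedy rollout with the learned values yields the final dispatching order.
state, makespan = initial_state(), 0
while open_jobs(state):
    action = max(open_jobs(state), key=lambda a: Q[(state, a)])
    state, makespan = step(state, action)
print("makespan of greedy schedule:", makespan)
```

A DQN variant, as described in the surveyed papers, would replace the lookup table Q with a neural network that maps a feature representation of the shop-floor state to Q-values, which is what makes the approach scale to realistically sized instances.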
fhnw.publicationState: Unpublished (en_US)
fhnw.results: The JSSP is a highly researched topic, with many improvements made over the years. Methods have progressed from simple branch-and-bound algorithms to heuristics such as GA and PSO, and the current state of research focuses strongly on RL, in particular DQNs. Finding a niche in which to further improve and extend existing methods is difficult. The results of the literature show that, so far, no one has addressed RL in a takt production environment such as the one proposed by the SRS project, which should be further explored. Of the solutions other researchers have proposed, applying a DQN to the SRS project seems the most reasonable. (en_US)
relation.isAuthorOfPublication: 87aa3927-3516-4dbc-993e-9844f977042b
relation.isAuthorOfPublication.latestForDiscovery: 87aa3927-3516-4dbc-993e-9844f977042b
relation.isEditorOfPublication: 05c09e6c-1338-417c-9dd5-9089f70d15fe
relation.isEditorOfPublication.latestForDiscovery: 05c09e6c-1338-417c-9dd5-9089f70d15fe
relation.isMentorOfPublication: 27ad9db5-9c7a-4864-8084-c2a71e3d635d
relation.isMentorOfPublication.latestForDiscovery: 27ad9db5-9c7a-4864-8084-c2a71e3d635d
Files

Original bundle

Name: P7b-Schlebusch_David-Solving_the_Job-Shop_Scheduling_Problem_with_Reinforcement_Learning.pdf
Size: 990.95 KB
Format: Adobe Portable Document Format