Developing components for a data-driven trading system

Göteborg: Chalmers tekniska högskola, 2018. 40 pp.
[Undergraduate thesis]

Data is an important resource which, through data science, can yield many interesting insights and predictions. To be useful, data must be collected, stored, preprocessed and modelled. The purpose of this project was to develop components that together form a simple data-driven stock trading system. First, software was developed to web scrape publicly available news and pricing data from the Swedish stock exchange AktieTorget. Second, this data was used to test whether it is possible to create models and tools capable of aiding or performing trading decisions. To support the development of the above-mentioned software and models, some theory about stock markets is provided, together with a walkthrough of the workflow and components of data science. The result is a system that uses the Python framework Scrapy to automatically collect news and pricing data and pass it on to a MongoDB database. The characteristics of the collected pricing data made it hard to work with, which was partly remedied by manually collecting data from Nasdaq OMX. In addition to the system for collecting and storing data, three experiments were conducted to test the model and tool developed. All three experiments gave interesting insights, even though the results were not conclusive. The single most interesting result was the model's predictive performance on clustered signals. As potential future work, an API connection to a broker (e.g. Nordnet) could be developed, which would enable the models to be used for real-time trading. Moreover, the news and pricing data could be used together with natural language processing to create more sophisticated models.
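The collection pipeline described in the abstract (a Scrapy spider feeding scraped pricing rows into MongoDB) can be illustrated with a minimal normalization step in plain Python. The field names, the ISO date format, and the Swedish decimal-comma price format below are assumptions for illustration, not details taken from the thesis:

```python
from datetime import date

def normalize_price_row(raw: dict) -> dict:
    """Convert one scraped pricing row into a document ready for
    insertion into a MongoDB collection. Swedish price quotes often
    use a decimal comma, so '12,50' becomes the float 12.5."""
    return {
        "ticker": raw["ticker"].strip().upper(),
        "date": date.fromisoformat(raw["date"]),
        "close": float(raw["close"].replace(",", ".")),
    }

# Hypothetical scraped row, as a Scrapy spider might yield it:
row = {"ticker": " abc ", "date": "2018-02-07", "close": "12,50"}
doc = normalize_price_row(row)
```

In a real Scrapy project this logic would typically live in an item pipeline, whose `process_item` method cleans each item before a client such as pymongo writes it to the database.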

Keywords: web scraping, data science, machine learning, stock market

The publication was registered 2018-02-07. It was last changed 2018-02-07.

CPL ID: 254880
