Over the past two decades, and especially since the 2016 United States presidential election, there has been an increase in unreliable news content. The question that has been asked, and continues to be asked, is: how can we tell which is which? To find an answer, I started with OpenSources, which curates a list of online information sources. The sites listed range from credible news outlets to misleading and outright fake ones.
UPDATE: The opensources.co website no longer exists.
To approach the project in a structured way, I split it into three tasks (one notebook each):
- collecting content (01_data_collection.ipynb), illustrated with a small fetching sketch after this list
- data exploration (02_data_exploration.ipynb)
- creating a Keras model (03_data_modeling.ipynb)
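To give a taste of the collection step, here is a minimal sketch of pulling article text from a listed site. It assumes a plain requests + BeautifulSoup approach and a hypothetical helper name; the actual notebook may rely on other tools (see Part 1 below).

```python
# Illustrative only: fetch_article_text is a hypothetical helper, and
# requests + BeautifulSoup is an assumed approach, not necessarily the
# one used in 01_data_collection.ipynb.
import requests
from bs4 import BeautifulSoup

def fetch_article_text(url: str) -> str:
    """Download a page and return its paragraph text as one string."""
    response = requests.get(url, timeout=10)
    response.raise_for_status()
    soup = BeautifulSoup(response.text, "html.parser")
    paragraphs = [p.get_text(strip=True) for p in soup.find_all("p")]
    return "\n".join(paragraphs)
```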
I am interested in answering the following questions:
- how much unreliable content can we find?
- what makes that content different?
- can we create deep learning models to help us differentiate? (a minimal model sketch follows this list)
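To make the last question concrete, below is a minimal sketch of the kind of binary text classifier explored in 03_data_modeling.ipynb. The toy data, vocabulary size, sequence length, and layer sizes are illustrative assumptions, not the notebook's actual settings.

```python
# Minimal baseline: raw text -> integer sequences -> embedding -> dense head.
# All data and hyperparameters here are placeholder assumptions.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

texts = tf.constant([
    "example of a reliable news article",
    "example of an unreliable news article",
])
labels = np.array([0, 1])  # 0 = reliable, 1 = unreliable

# Map raw strings to fixed-length integer sequences.
vectorizer = layers.TextVectorization(max_tokens=20000, output_sequence_length=300)
vectorizer.adapt(texts)
x = vectorizer(texts)

model = models.Sequential([
    layers.Embedding(input_dim=20000, output_dim=64),
    layers.GlobalAveragePooling1D(),
    layers.Dense(32, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # probability the text is unreliable
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x, labels, epochs=2, verbose=0)
```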
What are the next steps?
- create a dashboard where news content can be pasted in to get a prediction (see the prediction sketch after this list)
- add classifiers that distinguish the text more finely based on its "content" and "context"
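As a hypothetical sketch of what the dashboard's prediction step could wrap: load a trained model and score the pasted text. The file name and the 0.5 threshold are assumptions, and the model is assumed to have been exported with its text preprocessing attached so it accepts raw strings.

```python
# Hypothetical sketch only: "reliability_model.keras" is an assumed file
# name, and the saved model is assumed to include its own text
# vectorization so it can take raw strings directly.
import tensorflow as tf

def predict_reliability(pasted_text: str,
                        model_path: str = "reliability_model.keras") -> str:
    model = tf.keras.models.load_model(model_path)
    score = float(model.predict(tf.constant([pasted_text]), verbose=0)[0][0])
    label = "unreliable" if score >= 0.5 else "reliable"
    return f"{label} (score={score:.2f})"

# Example use:
# print(predict_reliability("Paste the full text of a news article here."))
```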
The following blog posts describe the workflow and approach of each notebook:
Part 1: collecting news content for this project using different tools
Part 2: exploring the data to better understand what type of content we have
Part 3: building a deep learning model to predict whether a given text is reliable or not
Part 4: next step(s)
License MIT © Stephan Osterburg