Scraping Ticker Symbols From Finviz Screener with Selenium in Python

If you are a stock trader, investor, or someone interested in the stock market, you may have heard of or used Finviz. Finviz is a great website that provides many tools. The one I really like is their stock screener. It provides filters to sort out stocks/companies based on various conditions and criteria. The main categories for filtering are Descriptive, Fundamental, and Technical.

I have used this screener before to get a sample list of stock tickers for other trading strategy testing tools. I realized my old list had become outdated. Over time, some stocks get delisted from exchanges, new ones are added, and some even change ticker symbols. It is good to renew the sample list from time to time, and it shouldn't take too much effort if the process is automated.

With a little bit of knowledge of Python and Selenium, this process can be automated. It is also nice to see it in action when the browser opens up and goes through hundreds of webpages in a few minutes, doing work that would otherwise take hours manually. As I was getting ready to update my sample list of tickers, my old script kept giving errors. The issue was that my chromedriver was outdated as well. So, I had to download the latest chromedriver and update the script.

What I am trying to do is go to Finviz and use their filtering view to get the list of tickers. But I don't want to apply any filters; this time I want to get every ticker symbol that is available. The page shows me a list of 20 companies with ticker, company name, industry, market cap, price, volume, etc. While this time I only need tickers, it is good to know that the other data is available too and can be obtained with a similar process.
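
For example, the other columns in the table can be read with the same approach. Here is a minimal sketch that reuses the global driver created in the script further down; note that 'screener-link' is my assumption for the class name of the non-ticker cells, so inspect the page source to confirm it, since Finviz changes its markup from time to time.

# Sketch: read the other table cells (company, sector, market cap, etc.)
# from one screener page. 'screener-link' is an assumed class name --
# verify it against the current page source.
def get_row_data(url):
    driver.get(url)
    cells = driver.find_elements_by_class_name('screener-link')
    return [cell.text for cell in cells]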

The page only shows the first twenty companies. To view the rest of them, I would need to click the page numbers at the bottom of the page. There are 397 pages in total. That would be a lot of repetitive clicking. Luckily, I am automating it, and the browser will go through the pages automatically.
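
Rather than clicking the page links, the script further down builds each page's URL directly. Finviz encodes the starting row in the r query parameter, with 20 rows per page, so the offsets the code computes look like this:

# Illustration of the per-page URLs the script builds (20 rows per page,
# 'r' is the 1-based row the page starts at)
base = 'https://finviz.com/screener.ashx?v=110'
for page in range(1, 4):
    offset = (page - 1) * 20 + 1
    print(base if page == 1 else base + '&r=' + str(offset))
# https://finviz.com/screener.ashx?v=110
# https://finviz.com/screener.ashx?v=110&r=21
# https://finviz.com/screener.ashx?v=110&r=41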

As the bot goes through the pages, it will keep recording the ticker symbols from each page. After it's done, it will print them on the screen in a list format. From there I can simply copy/paste and store them in a separate file and use them in other projects (or write them to a file directly, as sketched after the code below). I ended up with a list of 7940 ticker symbols.

Here is the code:

import os
from selenium import webdriver

# Build an absolute path from the current working directory,
# a folder, and a file name.
def get_file_path(folder_path, file_name):
    path = os.getcwd() + folder_path
    file_path = path + file_name
    return file_path

# Read the highest page number from the pagination links at the bottom
# of the screener and build the URL for every page.
# Each page holds 20 rows; the 'r' parameter is the starting row.
def get_page_urls(url):
    driver.get(url)
    pages = driver.find_elements_by_class_name('screener-pages')
    last_page = int(pages[-1].text)
    url_list = [url]
    for i in range(1, last_page):
        text = '&r=' + str(i * 20 + 1)
        url_list.append(url + text)
    return url_list

# Collect the ticker symbols (the first, linked column of the table)
# from a single screener page.
def get_ticker_symbols(url):
    driver.get(url)
    symbols = driver.find_elements_by_class_name('screener-link-primary')
    ticker_list = []
    for sym in symbols:
        ticker_list.append(sym.text)
    return ticker_list

# chromedriver is expected in <current working directory>/Desktop/python/
CHROMEDRIVER_PATH = get_file_path('/Desktop/python/', 'chromedriver')

# Screener URL with no filters applied
url = 'https://finviz.com/screener.ashx?v=110'
driver = webdriver.Chrome(executable_path=CHROMEDRIVER_PATH)

# Visit every page and collect the tickers into one list
url_list = get_page_urls(url)
full_list = []
for url in url_list:
    ticker_list = get_ticker_symbols(url)
    full_list.extend(ticker_list)

print(full_list)
print(len(full_list))
driver.quit()
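
Instead of copy/pasting from the printed output, the list can also be written straight to a file. A minimal sketch, with tickers.txt as an example file name:

# Write one ticker symbol per line to tickers.txt (example file name)
with open('tickers.txt', 'w') as f:
    for ticker in full_list:
        f.write(ticker + '\n')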

In this case, I didn't apply any filters. However, it might be better to apply filters for different situations, like creating a stock watchlist. We can easily apply various filters on Finviz; then we just need to copy the URL and assign it to the url variable in the code. This code also requires Selenium to be installed with pip, the latest chromedriver downloaded, and the location of the chromedriver specified in the code.
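
One caveat: the script above uses the old Selenium 3 style calls (find_elements_by_class_name and the executable_path argument), which current Selenium 4 releases no longer support. If you install a recent Selenium, the equivalent setup and lookups look roughly like this sketch (not tested against the current Finviz markup):

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.chrome.service import Service

# Selenium 4 style: pass the driver path through a Service object
# (recent versions can even download a matching driver automatically
# if no path is given)
driver = webdriver.Chrome(service=Service(CHROMEDRIVER_PATH))

# Element lookups take a By locator instead of the per-strategy methods
pages = driver.find_elements(By.CLASS_NAME, 'screener-pages')
symbols = driver.find_elements(By.CLASS_NAME, 'screener-link-primary')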