This machine learning project uses deep learning to predict forest cover type from cartographic variables. The forest cover type labels come from US Forest Service (USFS) data, and the independent variables are derived from data obtained from the US Geological Survey and the USFS. The goal of the project is to develop a classifier for this multi-class classification problem using TensorFlow with Keras, to improve the model's performance with hyperparameter tuning, and to keep the code clean and modular. Technologies used in the project include Python, Jupyter notebooks, pandas, NumPy, TensorFlow, scikit-learn, seaborn, Matplotlib, and Git.
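Below is a minimal sketch of the kind of Keras classifier this involves, not the project's actual code. It assumes tabular features and integer class labels (here random data stands in for the cartographic variables; the layer sizes and training settings are illustrative):

```python
import numpy as np
import tensorflow as tf
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

def build_cover_type_classifier(n_features, n_classes, hidden_units=64):
    """Small dense network for multi-class classification of tabular data."""
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(n_features,)),
        tf.keras.layers.Dense(hidden_units, activation="relu"),
        tf.keras.layers.Dense(hidden_units, activation="relu"),
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(
        optimizer="adam",
        loss="sparse_categorical_crossentropy",
        metrics=["accuracy"],
    )
    return model

# Random stand-in data: 54 cartographic features and 7 cover type classes,
# matching the shape of the public covertype dataset.
X = np.random.rand(1000, 54).astype("float32")
y = np.random.randint(0, 7, size=1000)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=42)

scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_val = scaler.transform(X_val)

model = build_cover_type_classifier(n_features=54, n_classes=7)
model.fit(X_train, y_train, validation_data=(X_val, y_val), epochs=5, batch_size=32)
```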
In this project I created a scraper that accesses a map provided by the Spanish Ministry of Telecoms and saves all of the useful data it contains on mobile towers in Spain in an SQL database table and in a CSV file. The program is written in Python and uses Jupyter notebooks for computing, SQL for managing the database, and Git for version control.
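A simplified sketch of the scrape-and-store pattern described above, assuming the map exposes tower records through a JSON endpoint; the URL, field names, and table schema below are placeholders, not the ones actually used in the project:

```python
import csv
import sqlite3
import requests

# Placeholder endpoint; the real map service and its parameters differ.
TOWERS_URL = "https://example.es/api/mobile-towers"

def fetch_towers():
    """Download the raw tower records as a list of dicts (assumed keys: id, operator, lat, lon)."""
    response = requests.get(TOWERS_URL, timeout=30)
    response.raise_for_status()
    return response.json()

def save_towers(towers, db_path="towers.db", csv_path="towers.csv"):
    """Persist the records to a SQLite table and a CSV file."""
    with sqlite3.connect(db_path) as conn:
        conn.execute(
            "CREATE TABLE IF NOT EXISTS towers "
            "(id TEXT PRIMARY KEY, operator TEXT, lat REAL, lon REAL)"
        )
        conn.executemany(
            "INSERT OR REPLACE INTO towers VALUES (:id, :operator, :lat, :lon)",
            towers,
        )
    with open(csv_path, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=["id", "operator", "lat", "lon"])
        writer.writeheader()
        writer.writerows(towers)

if __name__ == "__main__":
    save_towers(fetch_towers())
```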
In this project, I developed a computer program using Python, pandas, and Jupyter notebooks that automates the process of downloading data from the Spanish telecoms regulator. The program creates a folder to store the downloaded data, allowing users to save time and effort by avoiding the need to download the data manually. This program has the potential to save a telecoms strategy consulting firm significant amounts of time and effort. The program can be accessed by forking the repository and running the provided Jupyter notebook file. Additionally, the repository includes notebook files for creating useful plots from the downloaded data.
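A minimal sketch of this kind of download automation, assuming the regulator publishes files at predictable URLs; the URL pattern and folder name here are illustrative, not the real ones:

```python
from pathlib import Path
import requests

# Illustrative URL pattern; the regulator's actual file layout differs.
BASE_URL = "https://example.gob.es/datos/{year}_quarterly_report.csv"

def download_reports(years, out_dir="downloaded_data"):
    """Create a local folder and download one file per year into it."""
    folder = Path(out_dir)
    folder.mkdir(exist_ok=True)
    for year in years:
        url = BASE_URL.format(year=year)
        target = folder / f"{year}_quarterly_report.csv"
        if target.exists():
            continue  # skip files that were already downloaded
        response = requests.get(url, timeout=60)
        response.raise_for_status()
        target.write_bytes(response.content)

if __name__ == "__main__":
    download_reports(range(2018, 2023))
```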
In this project, I deployed a machine learning web application called "Food Vision" on Google Cloud that classifies images of up to 12 different types of food. The web application is built with Streamlit and uses Google Cloud's App Engine and AI Platform to classify the images. By doing this project I learned how to deploy a Streamlit-powered web application using Google Cloud, how to use App Engine and AI Platform, and how to debug Streamlit deployments. I also learned about the cost of running machine learning apps on Google Cloud. The project uses technologies such as Google App Engine, AI Platform, Cloud Storage, Colab notebooks, Git Bash, and Git for version control.
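A stripped-down sketch of a Streamlit front end for an app like this, with the model call replaced by a placeholder function; the class name and prediction logic are illustrative, not the deployed Food Vision code:

```python
import streamlit as st
from PIL import Image

def predict_food_class(image: Image.Image) -> str:
    """Placeholder for the real call to the hosted model (e.g. an AI Platform endpoint)."""
    return "pizza"  # illustrative fixed answer

st.title("Food Vision")
uploaded = st.file_uploader("Upload a photo of food", type=["jpg", "jpeg", "png"])

if uploaded is not None:
    image = Image.open(uploaded)
    st.image(image, caption="Uploaded image", use_column_width=True)
    if st.button("Classify"):
        st.write(f"Prediction: {predict_food_class(image)}")
```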
In this project, I created a web page that hosts a habit tracker for its users. The habit tracker is based on the one proposed by James Clear in his book 'Atomic Habits'. I used HTML, CSS, and JavaScript to create the web page, and it has a table layout with interactive onclick events. The goal of the habit tracker is to help users develop positive habits and achieve their goals by providing visibility and accountability for their actions. Users can track a wide range of habits, and by regularly monitoring their habits, they can gain insights into their behavior and identify areas for improvement. I use it regularly to track the sports and exercise I do.
In these projects, hosted on CodePen, I created responsive websites using HTML and CSS:
In these projects, hosted on freeCodeCamp, I implemented solutions to algorithmic problems using JavaScript:
In these projects, hosted on CodePen, I created websites using React.js, a JavaScript library used to build user interfaces for web and mobile applications:
In these projects, hosted on CodePen, I created interactive visualizations using D3.js, an open-source JavaScript library for data visualization in web browsers:
In these projects, hosted on Replit, I created APIs using Node.js and Express.js. Node.js is a JavaScript runtime, and Express.js is a web framework for Node.js:
In these projects, hosted on Replit and Boilerplate, I applied the principles of information security to create secure applications:
In these projects, hosted on Replit, I used Python to perform scientific computing tasks such as data analysis, arithmetic operations, and probability calculations:
In these projects, hosted on Replit, I created web applications with tests written in Jest, a JavaScript testing framework developed by Facebook:
In these projects, hosted on Replit, I used Python to analyze and visualize data:
In this specialization, delivered by Stanford's Andrew Ng via his company DeepLearning.AI, I built and trained machine learning models in Octave and Python using NumPy and scikit-learn. The projects covered supervised learning (regression, binary and multi-class classification), unsupervised learning (clustering, anomaly detection, and recommender systems with collaborative filtering and content-based deep learning), and deep reinforcement learning.
In these projects I built and trained deep neural networks from their most basic elements using Python and applied them to various tasks, such as visual detection and recognition, and natural language processing:
In these projects I used TensorFlow to build different types of machine learning models suited to different types of applications, such as computer vision, classification, regression, natural language processing or time series prediction:
In these projects, I deployed machine learning models in browsers and mobile applications, gained experience with TensorFlow data services and APIs, as well as with techniques for processing unstructured data and maintaining data privacy, using tools such as TensorFlow Serving, TensorFlow Hub, and TensorBoard.
Click on each line to expand it and see the labs associated with each. You can find all labs here:
Click on each line to expand it and see the labs associated with each. You can find all labs here:
In this project I clean and analyze a sample of the English-language Wikipedia's clickstream data from January 2018 using PySpark SQL.
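A brief sketch of the PySpark SQL pattern used for this kind of analysis, assuming the clickstream dump has been downloaded locally as a tab-separated file; the path, column names, and query are illustrative, not the project's actual code:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("clickstream-analysis").getOrCreate()

# Illustrative path; the real file is the January 2018 English clickstream dump.
clickstream = (
    spark.read
    .option("delimiter", "\t")
    .option("inferSchema", "true")
    .csv("clickstream-enwiki-2018-01.tsv")
    .toDF("source", "target", "link_type", "click_count")
)

clickstream.createOrReplaceTempView("clickstream")

# Example query: the most common sources of traffic to a given article.
top_sources = spark.sql("""
    SELECT source, SUM(click_count) AS total
    FROM clickstream
    WHERE target = 'London'
    GROUP BY source
    ORDER BY total DESC
    LIMIT 10
""")
top_sources.show()
```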
In this project, I analyze a small portion of a dataset published by Common Crawl using PySpark.
In this repo I handle missing data and apply data wrangling, cleaning, and tidying techniques to a Stack Overflow survey, and then analyze it.
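A small sketch of the kind of missing-data handling involved, using pandas on a made-up survey-like frame; the column names and imputation choices are placeholders, not the actual survey fields or the project's decisions:

```python
import numpy as np
import pandas as pd

# Made-up stand-in for a survey export with gaps and messy types.
survey = pd.DataFrame({
    "years_coding": ["5", "10", None, "2", "n/a"],
    "salary": [55000, np.nan, 72000, np.nan, 48000],
    "country": ["Spain", "Spain", None, "France", "France"],
})

# Normalise sentinel strings to proper missing values, then coerce types.
survey = survey.replace("n/a", np.nan)
survey["years_coding"] = pd.to_numeric(survey["years_coding"], errors="coerce")

# Different strategies per column: median imputation vs. dropping rows.
survey["salary"] = survey["salary"].fillna(survey["salary"].median())
survey = survey.dropna(subset=["country"])

print(survey.isna().sum())                        # remaining missing values per column
print(survey.groupby("country")["salary"].mean()) # quick aggregate check after cleaning
```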
In this project I use inverse probability of treatment weighting (IPTW) to determine whether the use of cover crops causes an increase in crop yields.
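A condensed sketch of the IPTW approach, assuming a binary cover-crop treatment indicator, a yield outcome, and a couple of confounders; the variable names and simulated data are illustrative, not the project's dataset. Each unit gets the weight w = T/e(x) + (1-T)/(1-e(x)), where e(x) is the estimated propensity score:

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Illustrative data: treatment = cover crop use, outcome = crop yield,
# with soil quality and rainfall as confounders.
rng = np.random.default_rng(0)
n = 2000
soil = rng.normal(size=n)
rain = rng.normal(size=n)
treatment = rng.binomial(1, 1 / (1 + np.exp(-(0.8 * soil + 0.5 * rain))))
yield_ = 50 + 2.0 * treatment + 3.0 * soil + 1.5 * rain + rng.normal(size=n)
df = pd.DataFrame({"soil": soil, "rain": rain, "treated": treatment, "yield": yield_})

# 1. Estimate propensity scores e(x) = P(treated | confounders).
ps_model = LogisticRegression().fit(df[["soil", "rain"]], df["treated"])
e = ps_model.predict_proba(df[["soil", "rain"]])[:, 1]

# 2. Inverse probability of treatment weights: w = T/e + (1-T)/(1-e).
w = df["treated"] / e + (1 - df["treated"]) / (1 - e)

# 3. Weighted difference in mean outcomes estimates the average treatment effect.
treated = df["treated"] == 1
ate = (
    np.average(df.loc[treated, "yield"], weights=w[treated])
    - np.average(df.loc[~treated, "yield"], weights=w[~treated])
)
print(f"Estimated effect of cover crops on yield: {ate:.2f}")
```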
In this repo I aggregate data from multiple US Census files and create histograms and scatter plots of the data to obtain insights from it.
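A short sketch of the aggregate-then-plot workflow, assuming the census extracts are CSV files in one folder; the folder name and column names are placeholders, not the actual fields used in the repo:

```python
import glob
import pandas as pd
import matplotlib.pyplot as plt

# Combine every CSV in the folder into a single DataFrame.
files = glob.glob("census_data/*.csv")
census = pd.concat((pd.read_csv(f) for f in files), ignore_index=True)

# Placeholder column names; the real extracts use different fields.
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.hist(census["Income"], bins=30)
ax1.set_xlabel("Income")
ax1.set_ylabel("Count")

ax2.scatter(census["Income"], census["Population"])
ax2.set_xlabel("Income")
ax2.set_ylabel("Population")

plt.tight_layout()
plt.show()
```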