I am always trying to expand the set of tools I can use to build great things. And today let me present a lesser-known player in the web development tech stack: the programming language Julia.
Julia is a high-level programming language whose development began in 2009, with the initial goal of speeding up the processing of big amounts of data. But despite that initial specialization, Julia turns out to be a very good general-purpose programming language.
Why Julia? It’s super fast because it compiles code at run time with a JIT (Just-In-Time) compiler. Thanks to that, Julia can be as fast or even…
Computer science is amazing! Even simple, rule-based algorithms can demonstrate complex behavior and create beautiful things. The random walk is one of these algorithms.
It’s used in financial market simulations, physics, computer networking, psychology, and even gambling. Practically any aspect of the universe that contains some form of randomness can be simulated with the help of a random walk.
Check out this great sculpture in London, designed by a random walk algorithm:
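As a sketch of the idea from the paragraph above, here is a minimal 2D random walk in Python. The grid movement, step count, and function name are my own illustrative choices, not taken from the original article:

```python
import random

def random_walk_2d(steps, seed=None):
    """Simulate a simple 2D random walk on a grid.

    At every step the walker moves one unit up, down, left, or right,
    each direction chosen with equal probability.
    """
    rng = random.Random(seed)
    x, y = 0, 0
    path = [(x, y)]
    for _ in range(steps):
        dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        x, y = x + dx, y + dy
        path.append((x, y))
    return path

path = random_walk_2d(1000, seed=42)
print(path[-1])  # final position of the walker
```

Plotting `path` with any 2D plotting library produces the wandering lines the sculpture is built from.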
Previously we talked about building supervised learning models from scratch, and now it’s time to roll out the big guns. I’m going to use the PyTorch library to build a model that is able to recognize handwritten digits. After writing everything from scratch, it’s amazing how much PyTorch eases the process of developing and testing deep learning models. Let’s take a look at what I got.
Because we are going to use PyTorch, it has to be installed on your machine. The libraries also have to be imported into the Python file. Here is what we need:
Inspired by reading Stanislaw Lem’s works Golem XIV and Culture as Error. I read them in Russian and translated all quotes myself, so please let me know if something has been lost in translation.
In Golem XIV, Stanislaw Lem makes humans talk to an artificial superintelligence that is in a state of recurring self-improvement. Golem’s level of intelligence transcends the human level, and it is able to reach a comprehension of reality that humans could only dream about. …
This abstract is the result of reading and taking notes on Douglas Engelbart’s work Augmenting Human Intellect: A Conceptual Framework, written in 1962.
The problem Engelbart invites us to tackle is urgent. The complexity of civilization’s organization, and of the challenges we keep discovering, is growing exponentially, while human intelligence, even though it is growing, is not growing fast enough. This is especially true of human collective intelligence. As a result, human civilization is not prepared to face the complex challenges of today and tomorrow, and we may suffer more and more catastrophic consequences of our inability to comprehend and solve complex problems. …
As we know, there are many levels of abstraction between the hardware we use and what we see on the screen. We also hear that computers work in binary, which means the units of data are 0 and 1 (or true and false). When we tell the computer to compute something, it can’t just take the numbers and do the computation: first the numbers have to be converted to binary, and only then can the computation be performed. I decided to dive into how computers do computations in binary. For simplicity of representation, I built all the functions in Python.
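To make this concrete, here is a hedged sketch of binary addition in Python. The helper names (`to_binary`, `binary_add`, `from_binary`) are my own illustration, not necessarily the functions from the article; the adder mirrors how hardware chains full adders together:

```python
def to_binary(n, width=8):
    """Convert a non-negative integer to a list of bits, most significant first."""
    return [(n >> i) & 1 for i in range(width - 1, -1, -1)]

def from_binary(bits):
    """Convert a list of bits (most significant first) back to an integer."""
    value = 0
    for bit in bits:
        value = (value << 1) | bit
    return value

def binary_add(a_bits, b_bits):
    """Add two equal-width bit lists the way hardware does: bit by bit, with a carry.

    Each position acts as a 'full adder': sum bit = a XOR b XOR carry,
    new carry = majority(a, b, carry).
    """
    result = []
    carry = 0
    for a, b in zip(reversed(a_bits), reversed(b_bits)):
        result.append(a ^ b ^ carry)
        carry = (a & b) | (a & carry) | (b & carry)
    result.reverse()
    return result

print(from_binary(binary_add(to_binary(5), to_binary(9))))  # → 14
```

Note that this sketch ignores overflow: with an 8-bit width, a final carry out of the top bit is simply dropped, just as it is in fixed-width hardware registers.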
Previously we learned how to create different parts of neural networks. To train the different versions of our artificial intelligence, we used abstract training data. Now the time has come to use real data.
We are going to teach our neural net to recognize handwritten digits with the help of the MNIST dataset, also known as the “hello world” problem of the machine learning industry.
The MNIST dataset contains 60,000 labeled data points (images) as a training dataset and 10,000 as a testing dataset. The original data is stored in binary files, so we will need to do some extraction work. …
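The MNIST binaries use the IDX format: a header of big-endian 32-bit integers (a magic number, 2051 for image files, then the number of images, rows, and columns) followed by one unsigned byte per pixel. A minimal sketch of the extraction, with a function name of my own choosing and demonstrated on a tiny synthetic byte string rather than the real 60,000-image file:

```python
import struct

def parse_idx_images(data):
    """Parse an IDX image file (the MNIST binary format) from raw bytes.

    The header is four big-endian 32-bit integers: a magic number (2051
    for image files), the number of images, rows, and columns. The pixel
    data follows as unsigned bytes, one byte per pixel.
    """
    magic, count, rows, cols = struct.unpack(">IIII", data[:16])
    assert magic == 2051, "not an IDX image file"
    images = []
    offset = 16
    size = rows * cols
    for _ in range(count):
        images.append(list(data[offset:offset + size]))
        offset += size
    return images

# Synthetic example: one 2x2 "image" instead of a real 28x28 MNIST digit.
fake = struct.pack(">IIII", 2051, 1, 2, 2) + bytes([0, 128, 255, 64])
print(parse_idx_images(fake))  # → [[0, 128, 255, 64]]
```

For the real files you would read the bytes with `open(path, "rb").read()` and pass them in unchanged; the label files work the same way but with magic number 2049 and no row/column fields.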
My colleague at MIT, Farzad, inspired me to systematize my thoughts about the learning process I have followed since I began to study computer and data science. In this article I will try to give a short and precise summary of the critical parts of the learning process, especially when you are self-taught.
Fog of war
When we are learning a completely new subject, we don’t know what we don’t know. And we don’t have a lecturer, such as a professor, who can map it out for us, so we have to do it ourselves. At first I would focus on key concepts…
As you might know, right now I’m an MIT ReACT learner. This week we began a computer science course, and the first assignment was to come up with an idea for using the graph data structure. I thought I would use a knowledge graph as a real-world example of the graph data structure. I also thought I should create one that represents what we are going to learn throughout the program; the opportunity to show off during class is priceless.
I have already written about the graph data structure previously, so I am not going to repeat myself and…
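As a quick refresher on the data structure itself, a graph can be stored as an adjacency list: each node maps to the list of nodes it points to. The course names below are invented placeholders, not the actual ReACT curriculum:

```python
# A minimal directed graph as an adjacency list. The node names here are
# hypothetical courses, each pointing at the courses it unlocks.
curriculum = {
    "Python Basics": ["Data Structures"],
    "Data Structures": ["Algorithms", "Databases"],
    "Algorithms": ["Machine Learning"],
    "Databases": [],
    "Machine Learning": [],
}

def reachable(graph, start):
    """Return every node reachable from `start` using a depth-first search."""
    seen = set()
    stack = [start]
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            stack.extend(graph[node])
    return seen

print(sorted(reachable(curriculum, "Python Basics")))
# → ['Algorithms', 'Data Structures', 'Databases', 'Machine Learning', 'Python Basics']
```

The same dictionary-of-lists shape works for any knowledge graph; only the node names and edges change.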
Last time we talked about adding hidden layers to our neural network, and it did well at solving binary classification tasks. But what if we have more entities in our ontology? Let’s figure out how to teach our neural net to solve multi-class classification problems!
When it comes to neural networks, the training dataset comes first! We will generate 3 segments of the dataset to demonstrate how to work with more than two classes. Each segment will contain two-dimensional arrays of coordinates, and we will specify the range of these points to group them together. Therefore each…
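A minimal sketch of generating such segments in Python; the coordinate ranges, segment sizes, and the helper name are illustrative assumptions, not the article's actual values:

```python
import random

def make_segment(x_range, y_range, label, n=50, rng=None):
    """Generate n labeled 2D points uniformly inside the given ranges.

    Keeping each class inside its own coordinate range groups its points
    together, so the three clusters are linearly separable.
    """
    rng = rng or random.Random()
    return [
        ((rng.uniform(*x_range), rng.uniform(*y_range)), label)
        for _ in range(n)
    ]

rng = random.Random(0)
dataset = (
    make_segment((0, 1), (0, 1), label=0, rng=rng)
    + make_segment((2, 3), (0, 1), label=1, rng=rng)
    + make_segment((1, 2), (2, 3), label=2, rng=rng)
)
print(len(dataset))  # → 150
```

Before training, the integer labels would typically be one-hot encoded so the network can emit one output per class.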
Software Engineer, AI researcher and Transhumanist.