Welcome,
how can we help you?
I need more information about:
Get in touch
for investment-related enquiries
Download Kasko2go app
Download Gut-versichert app
Download Universalna app by kasko2go
Step One
Data Correlation
We receive the data from our customers with the goal of understanding the material and its content. We examine the data first, then compile a list of questions about it that is submitted to the customer for clarification.
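As a simple illustration of this first pass (the file and column names here are hypothetical, not an actual customer schema):

```python
# A first-pass inspection that surfaces the questions we send back
# to the customer. File and column names are illustrative only.
import pandas as pd

df = pd.read_csv("customer_claims.csv")

# What did the customer actually send? Shape and types first.
print(df.shape)
print(df.dtypes)

# Per-column missing-value counts often drive the clarification list.
missing = df.isna().sum().sort_values(ascending=False)
print(missing[missing > 0])

# Suspicious values become questions too, e.g. negative claim amounts.
if "claim_amount" in df.columns:
    print(df.loc[df["claim_amount"] < 0].head())
```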
Step Two
Data Cleaning
At this stage, we start looking into what the dataset is missing, which data can be used, and which data needs to be removed because it cannot be completed later. Cleaning the data involves searching for missing fields and either recovering the missing values from another source or removing the affected row entirely.
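A sketch of these two cleaning rules, with illustrative column names (recover a missing field where another field allows it, otherwise drop the row):

```python
# Illustrative cleaning pass: recover what can be recovered,
# drop rows whose required fields cannot be completed later.
import pandas as pd

df = pd.read_csv("customer_claims.csv", dtype={"postcode": str})

# Example recovery: a missing region can sometimes be derived from a
# field that is present (a hypothetical postcode-to-region lookup).
postcode_to_region = {"8001": "Zurich", "9490": "Vaduz"}
df["region"] = df["region"].fillna(df["postcode"].map(postcode_to_region))

# Fields that cannot be completed later force the whole row out.
df = df.dropna(subset=["driver_id", "date", "region"])
```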
Step Three
Data Augmentation
This is the stage at which we add our own data to the customer's data. We search for ways, some obvious and some less so, to attach additional data to the data we have. For example, if we know the time and date of a data line (a row), we can add the weather, or the risk that was expected at that location at that specific time and date.
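The weather example, as a sketch (the weather table and join keys are assumptions for illustration):

```python
# Augmentation by joining an external weather table onto the customer
# data by location and date. Table and column names are assumed.
import pandas as pd

claims = pd.read_csv("claims.csv", parse_dates=["date"])
weather = pd.read_csv("weather.csv", parse_dates=["date"])

# A left join keeps every original row; the weather columns
# (e.g. rainfall_mm, temperature) become new candidate parameters.
augmented = claims.merge(weather, on=["location", "date"], how="left")
augmented.to_csv("augmented.csv", index=False)
```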



This stage turns the basic dataset into an augmented dataset and is extremely important for our data research. It is one of the key factors in our being able to achieve better performance on the results than our customers can.



Augmentation is not a trivial step: simply adding more parameters will quickly overwhelm the dataset's ability to support any conclusions. That means that whenever we add a parameter, we still need to run algorithms that tell us whether or not the parameter is important enough for the model we want to build.
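One generic way to run such a check (a standard illustration, not our proprietary selection procedure):

```python
# Compare cross-validated model scores with and without a newly added
# parameter; keep it only if it measurably helps generalization.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

df = pd.read_csv("augmented.csv")                  # hypothetical dataset
base_features = ["speed_limit", "traffic_volume"]  # hypothetical columns
candidate = "rainfall_mm"                          # the new parameter

model = RandomForestRegressor(n_estimators=200, random_state=0)
base = cross_val_score(model, df[base_features], df["claim_cost"]).mean()
extended = cross_val_score(
    model, df[base_features + [candidate]], df["claim_cost"]
).mean()
print(f"score without: {base:.3f}, with: {extended:.3f}")
```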

Step Four
Data Compilation
At this stage, we add data that might be missing from the original dataset. As an example, consider our Risk Map. The risk map, as part of its makeup, contains data about traffic accidents. The problem with this data is that traffic accidents do not happen on every segment of the road, yet to calculate risk for every segment we need accident data for every segment. Moreover, the fact that a segment of the road has never seen an accident does not mean that an accident will never happen there, so simply writing that the risk is 0 for every segment that has never seen an accident would not be correct.



Hence, we need to complete our accident data: we need to be able to assign an "accident" to a place where there was no accident, to reflect future possibilities. This type of process is much more complex than any of the steps before it and takes a very long time to complete for the most significant parameters of the datasets we analyse and use.
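One standard technique that captures the idea (a sketch only; the actual compilation process described above is far more involved): shrink each segment's observed accident rate toward the global rate, so that accident-free segments receive a small but non-zero risk.

```python
# Empirical-Bayes-style smoothing of per-segment accident rates.
# Column names (accidents, exposure) are assumptions for the example.
import pandas as pd

segments = pd.read_csv("segments.csv")

global_rate = segments["accidents"].sum() / segments["exposure"].sum()
prior_strength = 50.0  # pseudo-exposure; chosen for illustration

# Segments with little exposure stay near the global rate; segments
# with lots of history are dominated by their own data. The estimated
# risk is never exactly zero, even with zero recorded accidents.
segments["risk"] = (
    (segments["accidents"] + prior_strength * global_rate)
    / (segments["exposure"] + prior_strength)
)
```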
Step Five
Data Analysis
At this stage we build the model that will later be used by the customer. The process of building a model varies from dataset to dataset and from customer to customer, while maintaining some of the core elements. The details of our analytics methods are a trade secret and will not be discussed in this public account.
A Quick Curriculum Vitae of our Core Team
These are some of the alma maters of our esteemed colleagues, together with the credentials they earned there.
Technion - Israel Institute of Technology
Haifa

D.Sc. Aerospace Engineering

M.Sc. Aerospace Engineering
Kharkiv University of Air Force
Kharkiv

Ph.D. Cybernetics, Control System & Intercommunications

M.Sc. Mathematical Maintenance of Automatic Control Systems
Ben-Gurion University of the Negev
Be'er Sheva

Ph.D. Condensed Matter & Quantum Computation

M.Sc. Condensed Matter & Quantum Computation

B.Sc. Physics
Donetsk National Technical University
Donetsk

Ph.D. Systems & AI aids

M.A. Economic Cybernetics
We use a neural network to take multiple multidimensional datasets and optimize them into a multidimensional output, which allows us to give our clients optimization models on multiple parameters.

Neural Network
This image shows a basic example of a neural network.
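For readers who prefer code to diagrams, here is a minimal multidimensional-in, multidimensional-out network in PyTorch (the layer sizes are arbitrary and do not reflect our actual architecture):

```python
# A toy network mapping 32 input features to 8 output parameters.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(32, 64),  # 32 features from the merged datasets (assumed)
    nn.ReLU(),
    nn.Linear(64, 64),
    nn.ReLU(),
    nn.Linear(64, 8),   # 8 jointly optimized output parameters (assumed)
)

x = torch.randn(16, 32)  # a batch of 16 feature vectors
y = model(x)
print(y.shape)           # torch.Size([16, 8])
```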

Dimensions
The more dimensions used, the more complex the data becomes.
To make the multi-dimensional data easier to comprehend, display, and work with in a 2-dimensional plane, we must use a dimension-reducing algorithm to break down, or compress and unfold, the data while keeping certain relations between data points.
After successfully bringing the data into a more easily readable format, we analyze the data clusters in a third step and create a separation in the otherwise over-cluttered cluster. This allows us to differentiate, to a degree, between profitable/positive and costly/negative contracts, premiums, and drivers.
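A sketch of both steps with standard tools, t-SNE for the reduction and k-means for the separation (the text does not specify which algorithms we actually use, so these stand in for illustration):

```python
# Reduce 40-dimensional data to 2 dimensions, then split the 2-D
# cloud into clusters. Algorithms and data here are placeholders.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
X = rng.random((500, 40))  # 500 contracts, 40 features (dummy data)

# Compress to 2 dimensions while preserving local neighbourhood
# relations between data points.
X2d = TSNE(n_components=2, random_state=0).fit_transform(X)

# Separate the cloud, e.g. into profitable vs. costly groups.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X2d)
print(np.bincount(labels))
```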

No 3D Glasses Required
By bringing the data down to 2 dimensions, we can make better use of it.
© 2017-2021 kasko2go AG. All rights reserved.
Get in touch if you have any questions about our insurance solutions or technology.