MAT Working Group session
.
RIPE 82
.
19 May 2021
.
13:00 (UTC+2)

BRIAN TRAMMELL: Good afternoon, everyone. We'll get started in a couple of minutes.
.
It's one o'clock, Nina is here. All of our speakers are here, let's go ahead and get started.
.
We have the Chair slides. I can't figure out how to get the ‑‑ the Chair slides are apparently not pre‑loaded though. So, there is a very nice picture of a hammer and chisel set there. Go download them yourself.
.
We have a pretty full agenda today, so I think we'll go ahead and jump directly into it. We have three speakers: Luuk Hendriks, talking about interactively analysing the RPKI with JDR.jl; Christian Teuschel talking about RIPEstat's 10th anniversary; and Robert doing a RIPE NCC tools update. And then, with time available, any other business.

With that, I will ask Luuk to come on up.

LUUK HENDRIKS: Good afternoon, everyone. My name is Luuk, I am with NLnet Labs and I want to tell you today how we can interactively analyse the RPKI using a thing called JDR.jl.
.
The short version of it is, well, what the title says: we want to do this explorative, iterative, interactive analysis of things within the RPKI, but sometimes that's quite hard to do. Just to make sure we're on the same page, I will explain all the different parts of the title before we lead into a demo where I show how we use this tool.
.
Doing things backwards. I assume that most people by now have at least heard of RPKI, but just to sum up, let's say, the crucial parts that we all need to understand to go through this presentation. RPKI stands for the Resource Public Key Infrastructure, and that's basically to make verifiable statements saying, hey, this certain prefix can be announced by this ASN. The way this is distributed is that, for example, the RIPE NCC has a set of resources and they hand out certificates to their members saying this is part of the address space that's allocated to you, so you can say which ASN might announce prefixes within this block. Then somebody on the other side of the planet can fetch all this information from this public repository ‑ the RPKI is a public repository ‑ see what is announced in BGP and then verify: well, this is the AS that originates this prefix, that seems okay, let's accept this route, for example.
.
There are many, many, many more nitty‑gritty details to it, but that's far too complicated for these 15 minutes today. What is important to realise is how this repository is all built up: it's a combination of certificates, so that's X.509 certificates with number resources listed on them. There is a manifest ‑ a certificate points to a manifest, which lists another bunch of files. Then there are X.509 revocation lists, but also ROAs. The ROA is in the CMS format. The ROA is what contains the middle sentence here, the attestation saying, hey, this is me, I announce this prefix.
.
Now, if you fetch this public repository and you go through it on your hard drive, looking for this ROA that you have just created, you end up in a very complex directory structure, something like this, with all these cryptic file names, etc., etc. That's not very convenient. So, let's see, if we finally found the ROA that we think we made, we forgot that it is all binary X.509 CMS stuff, and we cat the thing and we see, oh no ‑‑ there are a couple of readable strings in here, but the rest is just binary stuff that at the very least will break your shell. Realising we need a specific tool like OpenSSL to get to a human‑readable form of what is in this file.
.
Now, after you find yourself for the second time on Stack Overflow or Stack Exchange, finding out this particular argument you need to pass to OpenSSL to get at what's in this file, you create this to make your life easier, and then finally you get to this human‑readable form of the ROA that you have just created.
.
Perfect, right? So you want to verify that what's in this ROA, in this CMS object, is actually what you think you have put in there, and for a CMS object, that information is over here in the encapsulated content.
.
Lo and behold, that's so RPKI specific that that's not decoded by OpenSSL. So this is still useless.
.
Right. So, that's the RPKI, and that's also the motivation for why we want to have something better when doing these forms of interactive analysis, right? Because now you found the ROA, and most of it was human‑readable in the end, but what if you want to find the manifest that initially listed this ROA, and the certificate that pointed to the manifest?
.
You have lost all those relations. So, the main thing we're after is: we want to fetch all this data from the RPKI and then we want to go through it in an explorative manner, also in a way that we're used to, so that's either an interactive shell ‑ a REPL, you know this from Python ‑ or JupyterLab. You can imagine a scenario where you are relying on relying‑party software that normally uses this public RPKI data to say, okay, this seems to be a valid prefix/ASN combination to be announced, and it spits out maybe a non‑descriptive error saying, hey, there is no manifest file, that's broken. It doesn't even give you the full path; it just gives you the base name. Can we find the certificate? Can we find all the stuff that's below that certificate in the tree, etc.?
.
To do that, we came up with a thing called JDR.jl. So the .jl gives away that this is a Julia package. For those of you who are not yet familiar with Julia, it's an interpreted language, like Python, but Julia is also a just‑in‑time compiled language. And that's great, because the first run might be a bit slow, but after that the stuff is quite performant. And because it's interactive, because it's interpreted, we can use it in a notebook and in an interactive shell, and we can explore, in this case, the RPKI, without exactly knowing beforehand where we will end up.
.
We'll see in a bit what components make up this library. But if you are not interested in using a library like this, and you just want to explore in an easier way what is in the RPKI, you can go to this website. This is a web front end based on the JDR.jl library, and in here you can explore the repositories and all the delegated repositories, you can search for resources, you can search for file names, and you can also get to the nitty‑gritty details on the right here of the ASN.1 structure, with readable annotations to really help you understand what is in these binary blobs and whether they are correct; are they perhaps violating a certain RFC, and so on.

The components of JDR.jl: there are a couple of modules. ASN.1 creates this annotated structure, so we really do all the decoding by ourselves. There is the PKIX module that does the RPKI‑specific annotations of this thing and checks whether this stuff is actually valid RPKI: manifests, ROAs, etc.
.
And then the thing we are really going to work with here now is the RPKI module, which gives you the data structures, types and functions to work with the results of these two modules. There is some more stuff in the package, but we're not going in there now.
.
So, let me have a look at the time. Yes, let's quickly go to this demo. We'll see a couple of use cases there for operators having this error message from their software, but also historical analysis which might be more of a use case for researchers, for example.
.
So now I am going to switch tabs and now my window... now I maximise again, do you still see my browser window?

BRIAN TRAMMELL: Yes, it's working.

LUUK HENDRIKS: If you want to follow along, there was a version on the slide that I just showed. This is an interactive version, so there is a Julia kernel running on my computer here. I did run all the code beforehand so we don't have to wait for this first compilation step, even though that's not too long for most of the stuff that we're going to see here.
.
So there are some installation notes if you want to do it later yourself. The most important thing is we're going to get this JDR library into our workspace, and this is done with the "using" keyword of Julia, similar to "import" in other languages; that's not so interesting. What's interesting is that info message here: it says it found our configuration file.
.
This here. The important part is we configured this one to look at the AFRINIC and the RIPE trust anchors, so we're not going to look at the entire repository ‑ we could do that if we input all the RIRs ‑ but for now we're interested, for whatever reason, in RIPE and AFRINIC.
.
Okay. Knowing that, we'll first run, I guess, the most common function that you'll use if you are using JDR, which is process_tas. This processes the configured trust anchors and spits back the two most important data structures when you are using JDR: a tree and a lookup. So, the output here is output from the actual run. There are a couple of things wrong ‑ there are some manifests missing ‑ but that's not too interesting for now.
.
First we're going to look, what is this tree thing?
.
Because this is the most important data structure in JDR; this is what everything relies on and is built upon.
.
So what is the type of this tree thing? It's an RPKI node, something from this RPKI module within JDR that represents a file related to other files in the RPKI repository. So, as you can see, it has a parent, which is another RPKI node. It has children, pointing to zero or more children, and this way you can imagine that we can build a logical tree representing these certificates pointing to manifests, manifests pointing to zero or more children, etc.
.
So let's have a look at the tree that we just created with our process function. That's an RPKI node that holds a root certificate, and that's a synthetic object that's not in the RPKI, but all five RIRs have their own trust anchor; we tied them together in this one node with one entry point to the entire tree.
.
Now, let's have a look around. What's in the children of this tree? Well, it's the two trust anchors that we configured in our TOML file to process. We see the AFRINIC certificate and we see the RIPE trust anchor certificate. So this means that from the children we should be able to get back to the parent. So let's see. Yes, indeed, we come back to the root certificate, to the synthetic root node, if we go to any of the children and back to its parent. And to show you that this all really works: the tree is of course the parent of its children, and the parent of child one is the same parent as the parent of child two.
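The parent/children structure described here can be sketched with a toy Julia type. This is only an illustration of the idea, not JDR.jl's actual RPKINode definition; the names TreeNode and add_child! are made up:

```julia
# Toy sketch of the parent/children tree described above.
# Not JDR.jl's actual type; just the idea of a doubly-linked tree of files.
mutable struct TreeNode
    parent::Union{Nothing,TreeNode}  # certificate/manifest pointing at us
    children::Vector{TreeNode}       # files this one points to
    obj::String                      # stand-in for the decoded RPKI object
end

TreeNode(obj::String) = TreeNode(nothing, TreeNode[], obj)

# attach a child and set its back-pointer in one go
function add_child!(parent::TreeNode, child::TreeNode)
    child.parent = parent
    push!(parent.children, child)
    return child
end

root    = TreeNode("synthetic root certificate")
afrinic = add_child!(root, TreeNode("AFRINIC trust anchor cert"))
ripe    = add_child!(root, TreeNode("RIPE trust anchor cert"))

# going down to a child and back up ends at the root,
# and both children report the same parent
down_and_up = afrinic.parent === root
same_parent = afrinic.parent === ripe.parent
```

The back-pointer is what lets you walk up from any file to the certificate above it, as the demo does.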
.
So, if we scroll back a bit, we'll see that there is an object in an RPKI node, and this object is the actual file that we processed with the ASN.1 module and the PKIX module. And there are a couple of types of these RPKI objects: for the certificates, for the manifests, for the CRLs, for the ROAs, and each of these has specific fields that are particular to these types of files. The manifest, we can see that here ‑ by the way, this thing with the question mark is a search function, which works in your Julia session or in your notebook, that searches for documentation within the source code ‑ and there we can see that this is documented, listing all the fields, and one of these fields is files, listing all the file names that are listed in that manifest.
.
So, imagine if you used OpenSSL and you decoded this manifest file: you would see a lot of human‑readable things. But typically, this files field is in this encapsulated content field, which is still shown just as a binary blob because OpenSSL doesn't know how to decode a very specific thing like that.
.
So, we have seen the RPKI nodes with their parents and their children, so the logical tree that represents the entire RPKI repository, if you will. We have seen some of the files that are attached to that ‑ for example, the manifest file.
.
Now, the other thing that came back from our main function is the lookup thing. A lookup is a data structure that keeps track of certain things we might think are interesting for our further analysis, and it simply points to specific RPKI nodes.
.
So, if we look for the documentation here, there is actually no documentation yet, but automatically the fields are listed; for example, you see an ASN field, which is a mapping from an autonomous system number to one or more RPKI nodes. So, with this, we can quickly go, based on an AS number, to RPKI nodes that are somehow related to that autonomous system number.
.
There are some helper functions in JDR.jl. For example, search, to which you pass the lookup that we created and a certain autonomous system number. And, oh yes, we have two ROAs that list this autonomous system number as their asID.
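A lookup like this can be pictured as a plain dictionary from ASN to nodes. The sketch below mimics the search helper with made‑up documentation‑range ASNs and file names; register! and search_asn are hypothetical names, and the real JDR.jl types and signatures may differ:

```julia
# Toy lookup: map an AS number to the "RPKI nodes" (here just file names)
# that mention it. All data below is made up for illustration.
lookup_asn = Dict{Int,Vector{String}}()

# append a node to the vector for this ASN, creating the entry if needed
register!(l, asn, node) = push!(get!(l, asn, String[]), node)

register!(lookup_asn, 64500, "aaaa.roa")
register!(lookup_asn, 64500, "bbbb.roa")
register!(lookup_asn, 64501, "cccc.roa")

# a search helper in the spirit of the one shown in the demo
search_asn(l, asn::Int) = get(l, asn, String[])

hits = search_asn(lookup_asn, 64500)   # two ROAs list this ASN
```

The point of the indirection is that the lookup returns nodes, so from any hit you can still walk the tree up and down.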
.
So let's go to the first scenario, and find specific files within the RPKI. There is this built‑in of Julia, it's called methods: if you know there is a thing called search but you are not sure how to use it, you can actually ask for all the method signatures here, and there we find: hey, there is the first one we saw ‑ I just used it to search for an ASN. There is this other one that takes a string, and the parameter is the file name. I guess we should use that.
.
We have a file name. This is just a random thing I picked from the repository, and we're not using the full file name, we are just using part of the file name. I could even take out some of the characters here, and we're going to run ‑‑ yes, we find something.
.
Some explaining, perhaps. This is a pipe operator, used to chain functions in Julia. It's actually equivalent to what you see below here. So, we search, we call values upon that to just get the values, and we call first to get the first value. Why do we do that? Because the thing that is returned here is actually a mapping from the string ‑ the full file name, because we searched for a partial file name ‑ to the RPKI node that belongs to that file name.
.
So, by only looking at the values ‑ and I am taking the first one because we only had one result ‑ we get the actual RPKI node that holds this manifest file.
.
And it's important that we keep working with these RPKI nodes because then we can always go to the parent and to the children of this specific file of this RPKI node.
.
So, say there is something wrong with this. Well, perhaps it's actually caused by something in the certificate above, pointing to this manifest, and we can still go to the parent and get the actual object, which is not a manifest but a certificate, with some additional information.
.
Now, let's get back to the manifest. We have a feeling something is wrong and now we're going to use what JDR was actually built for, and that is the way this very nitty gritty detail ‑‑

BRIAN TRAMMELL: I will take the ‑‑ you know, the demo gods sort of like waving their hands at you to also give you a time check, we're two minutes. So this is an awesome rabbit hole, but I would make sure we have time for some questions because there is some discussion going on in the chat here.

LUUK HENDRIKS: Okay, this is really a pity, because this is the most beautiful thing, but it should be in the static version of this notebook, so I will just briefly say what else is in here.
.
We can iterate this RPKI node tree, and with that we can enter the tree at any point, so for example the synthetic root node, where we are going to iterate over the entire tree, or specific points like this manifest, and we can collect all the things, all the RPKI nodes below.
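Collecting everything below an entry point is a simple depth‑first walk. A toy sketch of that idea (SubNode and collect_below are invented names, not JDR.jl's iterator):

```julia
# Toy depth-first collection of a node and everything below it.
mutable struct SubNode
    children::Vector{SubNode}
    name::String
end
SubNode(name::String) = SubNode(SubNode[], name)

function collect_below(n::SubNode, acc::Vector{SubNode} = SubNode[])
    push!(acc, n)                 # visit this node...
    for c in n.children
        collect_below(c, acc)     # ...then recurse into its children
    end
    return acc
end

ta  = SubNode("trust anchor cert")
mft = SubNode("manifest"); push!(ta.children, mft)
push!(mft.children, SubNode("some.roa"))
push!(mft.children, SubNode("other.roa"))

# depth-first, entry point first
names = [n.name for n in collect_below(ta)]
```

Entering at a deeper node (here mft) gives you just that subtree, which is what the demo does with a single manifest.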
.
Using other well‑known libraries from the Julia ecosystem, we can then use, for example, these map and filter functions that allow you to query and investigate vectors and sets and those kinds of things.
.
We can see: what are the types below this very RPKI node? If you have found something that you think is causing a problem, then you can actually get all the ROAs below by filtering for a specific object type, and with the ROAs ‑ we know a ROA holds VRPs, so we have a helper function to get the VRPs. Then eventually we know, if there is a problem, all these prefixes might be affected.
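The filter‑then‑map step reads naturally in plain Julia. Below is a sketch with toy object types standing in for JDR.jl's manifest/ROA objects (MftObj, RoaObj and the string‑encoded VRPs are all invented, using documentation prefixes and ASNs):

```julia
# Toy object types standing in for JDR.jl's decoded RPKI objects.
abstract type Obj end
struct MftObj <: Obj
    files::Vector{String}
end
struct RoaObj <: Obj
    vrps::Vector{String}   # prefix / max-length / ASN attestations
end

collected = Obj[
    MftObj(["a.roa", "b.roa"]),
    RoaObj(["192.0.2.0/24-24 AS64500"]),
    RoaObj(["2001:db8::/32-48 AS64501"]),
]

# keep only the ROAs below our starting node...
roas = filter(o -> o isa RoaObj, collected)

# ...then flatten out their VRPs: if something above is broken,
# these are the prefixes that might be affected
affected = reduce(vcat, map(r -> r.vrps, roas))
```

Swapping the predicate (o isa MftObj, etc.) gives the same query for any other object type.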
.
Another thing, another cool thing is this function called link resources where we again use RPKI nodes and this time we map resources as listed on certificates to RPKI nodes that actually use these resources.
.
So ‑‑ and this is the specific type, let's not go in there now.
.
If we use the lookup and go over all the valid certificates, and we look for any resource whose value is not pointing to an RPKI node, then we know that that is a resource that is not actually used, and apparently there are more than 30,000 certificates having one or more of these things going on.
.
Let me also do a plot. Yes, it's here.
.
We want to know the distribution of the number of files on manifests; that's also easy to do. We filter again for manifests and just get the length of the files field which is in there, and using some other things from StatsBase, also a well‑known Julia package, we can create this.
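The distribution itself is just a count of lengths. A minimal sketch with made‑up numbers, using only the standard library instead of the StatsBase/plotting packages used in the demo:

```julia
# Toy "files per manifest" distribution: count how many manifests
# have each file count. The lengths below are made up.
files_per_manifest = [2, 2, 3, 2, 11, 5, 3, 2]

dist = Dict{Int,Int}()
for n in files_per_manifest
    dist[n] = get(dist, n, 0) + 1   # one histogram bucket per file count
end

# the follow-up query from the talk: manifests with more than ten files
big_manifests = count(n -> n > 10, files_per_manifest)
```

Feeding dist (or the raw lengths) to a histogram plot gives the figure shown in the demo.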
.
Then we can do more, like finding out where are the manifest with more than ten files, kind of thing.
.
Lastly, very briefly, we could also use it to process historical data. The RIPE NCC collects RPKI data on a daily basis. This is in a bit of a different directory structure, so you can use a function called process_ca, where you pass the certificate that you want to start the process with and the directory where the data resides. You can do the exact same things as we did before and collect them in what we call here output, which is just a named tuple with some fields, so we count unique ASNs, etc., etc.
.
Well ‑‑ and then you can use it as you see fit. You can plot a couple of things. So let's see: in the last five years we have seen an increase in the unique ASNs, obviously, and in the number of ROAs in the RPKI. Everybody is interested in v6; we also see an increase there. But the number of v4, that's what really blew up.
.
If you are using notebooks, you might like data frames, like in Python. That's also available for Julia. You can convert it to a data frame and use it as you are used to.
.
I am really running out of time so let me quickly go back to the presentation.
.
We have a lot of plans, we're still learning every day, and there is a lot more cool stuff that we can do with this, especially with the aspect of time, like differences between two points in time. If you want to know more, check out the GitHub: there is the code, there is the documentation, there is this static version that does work and has the nice printout of the structure.
.
Learn more about RPKI here and Julia here.
.
I don't want to skip this one. A big thank you, of course, to you all for listening to this very quick demo, but also to the RIPE NCC Community Projects Fund that enabled us to carry out this work.

And with that, and I guess a couple of minutes over time, I am not sure if we have time for questions, but I am happy to take any.

NINA BARGISEN: Do we have time for a very, very short question? If any shows up ‑ otherwise we recommend that people find you afterwards, maybe in SpatialChat, if you are there.

BRIAN TRAMMELL: Thank you very much for ‑‑ like, my big takeaway from this is, I should go look at Julia again because it looks like it's actually working; like, the last time I looked at it, it wasn't so much. If nobody jumps in with a question, there is one that I'm going to steal from Peter Hessler in the chat because I think it was a good one.
.
So, you are basically pulling things in to do a sort of semantic analysis of the RPKI tree. How do you handle weird sort of invalid or corner cases in the RPKI tree? Because there is a lot of ‑‑ the data quality on these things is, you know, in the high 90s. Like, how much of this work was basically dealing with the fact that the data itself is unclean and how do you represent that and can you use this tool to analyse that uncleanliness?

LUUK HENDRIKS: That's a great question, and actually this is the reason why we initially started with JDR, and that's why it's a pity that the printout didn't work.
.
But let me ‑‑ because I want to show you exactly what is printing.
.
There we are. So, like I said, we do this whole decoding ourselves for the very reason that we can be very lenient in what we accept. We actually check for ‑‑ in this digest algorithm thingy, there should be two children here ‑‑ no, actually, there should be one, but this NULL is allowed.

BRIAN TRAMMELL: Got it.

LUUK HENDRIKS: So we really annotate it on this very, very detailed level. Now, these things are really hard to test, so we typically see the thing breaking every now and then. That's the downside of Julia also, but let's not go into that. But that's the level at which we try to provide the user with information in this thing, and then, you know, if we eventually end up with all the listed files on the manifest, that's already quite high level. There are a lot of different types of remarks, etc., etc. This is also in the web interface, so please check it out.

BRIAN TRAMMELL: Excellent. Thanks so much. Time for like a ‑‑ I took the question slot. Time for like a 5‑second question if somebody has one; otherwise please find Luuk in SpatialChat, or we're around for the rest of the week.
.
With that, thank you very much. Round of applause.
.
I really miss that about like being in a room, it's like you don't even really get the round of applause. Thanks so much for the presentation.
.
I will go ahead and ask Christian to come up to give us the long form history of RIPEstat. Take it away.

CHRISTIAN TEUSCHEL: Thank you, Brian. All right. Then let me first share my slides ‑ yes, I do want to share them.
.
You can see me, you can hear me, I assume. So, as Brian said, welcome to the RIPEstat version of the ten‑year anniversary of RIPEstat. We have a small overview of what was before RIPEstat, then I am going to present highlights of the past ten years, and at the end I am going to give an outlook.
.
So, history has it that there was something before RIPEstat. There was a tool called REX, the Resource Explainer, and, according to the documentation, REX was a one‑stop shop for everything you wanted to know about Internet resources. And the interesting thing with REX was that it was heavily connected to a database called INRDB, which was a custom‑made database at the RIPE NCC. It was described as a non‑conventional database, and this is a very interesting aspect, because we also had to learn, over the past ten years, that the features that you are going to provide in an information service are very tightly connected with the features that the backend provides.
.
Then, as a reference point for what we are going to see in the ten years of RIPEstat, I wanted to mention that, according to the documentation again ‑ RIPE Labs is a very good source for that ‑ REX was receiving 400 requests in the first month.
.
Then, let's go to 2010. At the end of 2010, I joined the RIPE NCC, and this was one of the first projects that I was working on, and I had to learn the problem statement. The problem statement was that there is a lot of data and information about Internet number resources, but it was scattered all over different platforms, tools and locations.
.
So, you can imagine that the basic idea of RIPEstat was to combine all of these services into one easy‑to‑use interface. That one interface, I think, we didn't really manage, but I'm going to talk about that a little bit later. On the right side you see a wireframe, and you probably see the resemblance with RIPEstat.
.
What definitely hasn't changed since then is that we were focusing on two elements: We are going to provide statistics and we're going to provide status for the network or parts of the network.
.
If we would look at finding a date when RIPEstat was born, then I would probably vote for the 25th of January 2011, because this was when we presented RIPEstat to a bigger audience in the first public demos that we did.
.
2011 was, for us, a very eventful year, because just two days after we released RIPEstat there was the Internet outage in Egypt, and in the sense of being this one‑stop shop for Internet information, we created a special page for that. That was visited 40,000 times in just the first four days. Then, a few days later, there was the earthquake which had an impact on the Internet in Japan. We also created a special page for that. But if you are going to visit these pages, you will see one thing: for example, with the Internet outage in Egypt, we were basically monitoring a list of prefixes. That was definitely not ideal, because we were missing some essential features, and you are going to see what kind of features those are ‑ I'm going to get to that a little bit later.
.
In the course of 2011, we also moved from a single virtual machine to four bare‑metal machines. Nowadays, we have moved back: right now, RIPEstat is hosted on 40 virtual machines. And a little bit later we also released a mobile version of it for iOS. That was not too successful, because we always had a bit of a problem with the (something) towards the main version. That should be fixed right now with the new interface.
.
Two very important things that we also added in this year were the introduction of the data API and the widgets, and that together basically created these three layers that we still have in today's RIPEstat, although internally, of course, a lot of things changed.
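The data API layer can be queried directly over HTTP; the documented URL shape is https://stat.ripe.net/data/&lt;data-call&gt;/data.json?resource=&lt;resource&gt;. A tiny sketch building such a request URL (the data call and resource below are just example values):

```julia
# Build a RIPEstat Data API request URL. The general shape is
#   https://stat.ripe.net/data/<data-call>/data.json?resource=<resource>
# "announced-prefixes" and AS3333 are example values.
base_url  = "https://stat.ripe.net/data"
data_call = "announced-prefixes"
resource  = "AS3333"

url = "$base_url/$data_call/data.json?resource=$resource"
# fetch with any HTTP client, e.g. on the shell:
#   curl "https://stat.ripe.net/data/announced-prefixes/data.json?resource=AS3333"
```

The widgets sit on top of exactly these data calls, which is what creates the three-layer setup Christian describes.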

Usage‑wise, we were at around 10,000 requests per day. The year after that mainly saw improvements to what was already there: we had our first user interface update, we had the embeddable widgets, and we saw an increase of the usage to 200,000 requests per day.
.
Then, in 2013, we added BGPlay to RIPEstat, and I assume that everyone knows BGPlay. What you might not know is that BGPlay, as it is in RIPEstat, was not the first implementation of BGPlay. It was implemented by Roma Tre University before that. That was, of course, not that convenient if you want to have it in a web page. So we joined forces with the Compunet Research Lab, also from Roma Tre University, and created an open source tool, which then was implemented as the RIPEstat widget.

And there I would like to mention Massimo Candela, he was leading the efforts for the implementation. He was from Roma Tre University but later on joined the RIPE NCC. And I think if my information is correct, then it's his birthday, so happy birthday, Massimo.
.
There were a couple of other things that also happened in 2013. First of all, we had another user interface update; the look and feel was not aligned with the RIPE NCC style, so we changed that. We also integrated RIPEstat in RIPE Atlas, so the traffic graph and all the built‑in measurement graphs that you see in Atlas are powered by RIPEstat.
.
Before, I mentioned that there was one single feature left that would have helped to better analyse Internet outages, and that was added with the country routing stats. Since then, it has been numerously referenced on Twitter, Facebook and all other social media channels as a monitoring tool for Internet outages.
.
2014 was a very interesting year for us internally, because this was the first time we realised that we had created an information service that was being used by a lot of people, and then you are going to enter a phase that is called maintenance. I think for every product lifecycle that's completely normal. So we did quite a lot of development on the backends. From a usage perspective, we moved up to 3 million requests per day.
.
Then we celebrated the first five years of RIPEstat and, believe it or not, we had another UI update. The corporate identity of the RIPE NCC changed, so we got a new logo and different design elements, so we at RIPEstat also had to follow. Then we had plenty of backend changes as well, so that kept us busy, and is still keeping us busy, in a way. And at the same time, we abandoned Internet Explorer 8 on Windows XP. This doesn't sound like a big deal, but back then I think it was a big discussion.
.
In this year, 2015, the team size of RIPEstat was also one person ‑ and you are welcome. Usage‑wise, we moved up to 6 million requests per day, and that continued in 2016.
.
At the beginning of the year, we started with around 2 million daily requests and we ended the year with 30 million requests, so you can imagine that scalability was something that kept us busy, and of course backend changes.
.
2017: we had our longest service interruptions ‑ one was four hours, and one three hours. I think, in general, you learn more from failures than from successes anyway, and I think we did that too.
.
A light point in 2017 was something that Job Snijders created; he cheered us up by creating this Nyan Cat in one of the widgets that we had, the routing history widget, and this was a very interesting visualisation. First of all, very creative, I think, but that should not come as a surprise, because Job is also the art director at OpenBSD. The way he did it was that he used, I think, 175 /24 prefixes, and controlled, for each of them, over time, the visibility, and that basically created a kind of pixel animation in the routing history widget. Right now, I can understand why we're running out of IPv4, but nevertheless...
.
In the same year, we also added a project which was called, back then, Country Reports; that was a statistical view on different economies in the RIPE NCC region. Right now, we have changed it to Country Routing Stats, because Country Reports has been taken over by our colleagues from ER.
.
Then, needless to say, we did more improvements to the backends. Usage‑wise, we moved up to 50 million requests per day.
.
2018 was rather ‑‑ well, not that eventful. Probably for me it was very helpful that we added an additional FTE to the team, and we added the historical Whois widget and we added RPKI features.
.
Usage‑wise, we were basically at business as usual, between 30 and 60 million requests per day.
.
Then we get to 2019, which was a very important year for RIPEstat, first of all because we were joined by a dedicated UI engineer, and that gave us the resources and the ability to fundamentally rework the user interface, which you have probably heard about ‑ or have seen: since yesterday, we have the new user interface up. In the same year, we also did more outreach. The first outreach was going towards our peer RIRs, and we started the inter‑RIR collaboration.
.
The basic idea of the collaboration was that we are going to create regional representations of RIPEstat with basically local content, local data, to serve the regions much better. The first implementation that we did was in May with APNIC, that was NetOX, and a few months later we did something for LACNIC. And (inaudible) was the first user interface that was available in three different languages, so we had Portuguese, Spanish and English, and all these translations were done by LACNIC.
.
Then we hit a milestone with reaching 100,000,000 requests per day, and, just to put that in perspective, I looked up what the daily query rate on Google was in 2019, and that kind of means that we would have processed 2.8% of Google's daily searches. And given that we had probably not too many engineers working on that, if you interpret that 2.8% in terms of the amount of people that we had working on the service, then Google would have had the equivalent of 70 engineers in 2019 ‑ at least in September; I doubt that. But that was not the only record we broke in that year, because, just two months later, we reached 130,000,000 requests per day, and after that we simply stopped being obsessed about these numbers. Also, we didn't want to make Google look bad.
.
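The back‑of‑the‑envelope comparison above can be checked in a few lines. The 100 million requests per day and the 2.8% share come from the talk; Google's 2019 search volume of roughly 3.5 billion queries per day and the team size of two engineers are outside assumptions, used here only to reproduce the arithmetic.

```python
# Reproducing the talk's comparison. Assumptions (not from the talk):
# Google handled ~3.5 billion searches/day in 2019, and roughly two
# engineers were working on RIPEstat at the time.
GOOGLE_SEARCHES_PER_DAY = 3.5e9    # assumed ballpark figure
RIPESTAT_REQUESTS_PER_DAY = 100e6  # milestone from the talk
RIPESTAT_ENGINEERS = 2             # assumed team size

# RIPEstat's share of Google's daily query volume (~2.9%).
share = RIPESTAT_REQUESTS_PER_DAY / GOOGLE_SEARCHES_PER_DAY

# If engineering effort scaled linearly with request volume, Google
# would need about this many engineers of equivalent productivity.
equivalent_engineers = RIPESTAT_ENGINEERS / share

print(f"share: {share:.1%}, equivalent engineers: {equivalent_engineers:.0f}")
```
.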
Then, last year, we continued with this collaboration, so we added the AFRINIC Internet registry and routing statistics for AFRINIC. And just to show how much efficiency we gained: implementing this information service was all done within two weeks.
.
Then, at the last RIPE meeting, we released the new user interface as a beta, after one year of work on it, and, arriving in this year, we obviously have the ten‑year anniversary of RIPEstat. One part of this cake is for Massimo, so we need to save that.
.
We also opened up the widget code base, sharing it with our RIR partners, and we also plan to open source the widget API in general, because we don't want to fully depend on it after the release of the new user interface, which happened yesterday. With that, we also kept our promise that at RIPE 82 we would release this new user interface.
.
Just an hour ago, we had a drop‑in session where we explained all the features of the new user interface. I do believe that there is also a recording, so if you are interested in using the new interface and want to hear all about its features, I suggest you look at this recording.
.
Then a quick outlook for the second half of this year. Before this, we spent around two years focusing on the new user interface; that is now done, so we are going to move the new user interface into maintenance, and the team will now focus on the data API and on the infrastructure. That's why we are expanding to the Cloud for the data API; we want to harmonise the API documentation, we want to focus on service and data quality, and we want to update the documentation, which is probably long overdue.
.
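For anyone scripting against the service while that API work is ongoing, this is a minimal sketch of how RIPEstat Data API URLs are structured. The `https://stat.ripe.net/data/<endpoint>/data.json` pattern and the `announced-prefixes` endpoint are from the public documentation; the snippet only builds the URL and performs no request.

```python
from urllib.parse import urlencode

# Base of the public RIPEstat Data API.
BASE = "https://stat.ripe.net/data"

def ripestat_url(endpoint: str, resource: str, **params: str) -> str:
    """Build a RIPEstat Data API URL; the caller performs the request."""
    query = urlencode({"resource": resource, **params})
    return f"{BASE}/{endpoint}/data.json?{query}"

# Example: prefixes announced by the RIPE NCC's AS3333.
url = ripestat_url("announced-prefixes", "AS3333")
print(url)
# https://stat.ripe.net/data/announced-prefixes/data.json?resource=AS3333
```
.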
At the same time, we want to start thinking about a revamp of BGPlay, also to integrate it into the software stack of the new user interface, and I think, after eight years of service, it's more than time.
.
As I mentioned before, we also want to open source the widget API so that everyone can create their own widgets, and we promised that we will still keep those widgets online, so we will keep provisioning the widget API, just moving it to a more community‑driven model.
.
Next, we're also going to look into machine learning to see what is possible there, to add new features to RIPEstat and simply gain more insights from the data that we have.
.
And, of course, we're going to continue with collaborations.
.
Right now, I am almost at the end of my presentation, but I still want to cover three topics that are very important to me.
.
First of all, the spelling. Widget users, and people who know other services from the RIPE NCC, have probably noticed that there is RIPE space NCC, RIPE space Database, RIPE space Atlas and so on. But there is one exception, and that is RIPE no space stat. I personally have no explanation for why that is. I also talked with people who were around back then, and I didn't get an answer. So, from that point of view, you have a very good reason to be confused. And the downside of that confusion was that, with that naming, we entered basically ten years of a proliferation of different spellings, to the point that we couldn't even recognise whether you meant the service or not. For that reason, I also added two examples ‑ a good one and a bad one. Let's start with the bad one first.
.
"Look it up on stat. You know, the tool from RIPE."
.
There are basically already two things there that would cause sleepless nights for our communications department. First of all, it is of course RIPEstat, not stat. And on the other side, it's also not RIPE, because we are the RIPE NCC: we are the secretariat for the Internet community in Europe, Russia and parts of central Asia, and that community is called RIPE ‑ but that you probably already know.
.
Then, what you see here is an overview of feedback that we got from users. Before you wonder: NPS is simply a measurement score, the Net Promoter Score, and it provides an assessment of how likely it is that you would recommend a service to friends or colleagues ‑ I think it doesn't say anything about enemies or competitors. The answers range from 0 to 10. Overall, we are scoring very high on that, and that is definitely a reason for me to thank all the users: first of all for sticking with us for the past ten years ‑ I think we have not always been perfect, but we are working on that ‑ and also for helping us create a huge user base, because the 100 million requests that we got in one day were not coming from one single IP address; they came from more than a million unique IP addresses. In this Working Group session, I don't think I need to explain that some IP addresses are shared between multiple users and some users have more than one IP address, so a rule of thumb is that the number of unique IP addresses you see is a good measure of how many users you have.
.
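As a side note, the standard Net Promoter Score calculation behind a 0‑10 survey like this counts 9‑10 answers as promoters and 0‑6 as detractors. This is the textbook formula, not necessarily the exact method used for the RIPEstat feedback, and the sample scores below are hypothetical:

```python
# Standard NPS: percentage of promoters (9-10) minus percentage of
# detractors (0-6); passives (7-8) only dilute the result.
# The resulting score ranges from -100 to +100.
def nps(scores: list[int]) -> float:
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# Hypothetical sample of survey answers on the 0-10 scale.
sample = [10, 9, 9, 8, 10, 7, 9, 6, 10, 9]
print(nps(sample))  # 60.0
```
.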
Then, last and definitely not least, I want to thank my team. I am very proud of what we did in the past years. At this point, I also owe the team an apology, because the group picture is not complete: we are missing a couple of people, but my colleague photoshopped one in ‑ that is the head of Falon on the body of Stephen. And if that is your thing and you would like to get photoshopped into the next group picture, you might be lucky, because we are hiring. At the moment, we are looking for a full stack software engineer with an affinity for backend design and data science. And with that, I think I am coming to the end of my presentation.
.
I added some references so that you can look up and fact‑check the historical data that I put in there, and I would be happy to take questions.

NINA BARGISEN: Thank you so much, Christian. There is one question in the Q&A, and remember, please, folks, to add your questions in the Q&A or to ask for the microphone.
.
But, from Alexander Isavnin: "What about the quality of the historical data from 2012 and earlier, what do you feel about those?"

CHRISTIAN TEUSCHEL: I think that's a very good question. My answer right now is: with the new focus that we're going to have, we will work on that. For the past two years, we had to make a selection, because we're a small team, and that focus was on the user interface. That is now done, and we are going to turn our efforts towards the data API and the infrastructure. So it basically means that we will work more on data quality, we will add more datasets, and hopefully, if the data is available, go back further than 2012. I think I was talking with Alexander about this point at the previous RIPE meeting.

NINA BARGISEN: Thank you. Ivan Beveridge is asking: With respect to RIPEstat being stat.ripe.net, it might be useful to have a silent TCP and HTTPS... so what do you think about that?

CHRISTIAN TEUSCHEL: That's a very good point. And I have to say that it was also a bit in the rush of the moment. We worked up until Tuesday, when we released, and, for everyone who has big releases like that ahead: there are many things that can go wrong. We wanted to play it safe and, first of all, not put a permanent redirect from the root onto, for example, the mobile app, because, if you look right now at the landscape of the interfaces we have, we have a new user interface and we have the old user interface. It's very important for us to keep offering that, because we know that RIPEstat is used by many people in production environments, and quick changes are definitely not welcome there. So, as we go along in the next few weeks, we will work more on that and also harmonise the redirects. We will work on that probably not this week, but from next week on we hopefully will have a solution.

NINA BARGISEN: Okay. Cool. Thank you. And the last question is from Carlos Martinez ‑‑ no, it wasn't the last one, but there are two questions in the queue now and we will end with those.

So, Carlos Martinez from LACNIC is basically just thanking the RIPE NCC and thanking you for the awesome work that you have done. So, hear, hear, I concur with that.
.
And same thing from Massimo Candela from NTT, thank you for the update and thanks for the birthday wishes.
.
Happy birthday, Massimo!
.
All right. Thank you very much, Christian.
.
So we are ready to continue with Robert. So, Robert, you are here and you will share your screen, you know what to do and I'll just shut up and go for it.

ROBERT KISTELEKI: Here I go. Welcome everyone. I am Robert Kisteleki, I am responsible for the RIPE NCC R&D activities, so RIPEstat, RIPE Atlas, research and so on, and I am here as usual to give you an update on what's been going on recently in this space.
.
First of all, research.
.
I always have a slide about what kind of articles we published recently, and you can see the current list here. I would like to particularly highlight the first one, which is hot off the press: we added more datasets to the Google BigQuery engine, which now also covers RIPE Atlas datasets. Please feel free to explore. It is much more useful now than it was about half a year ago when we launched it. But the other articles can also be of importance and interest to people who are interested in Internet shutdowns, data analysis and other statistics.
.
I would also like to mention that publishing articles on Labs is not exclusive to the RIPE NCC, and definitely not to the R&D team; other researchers have also published a lot of articles there, using our data and other datasets as well.
.
RIPE Atlas.
.
Recent developments. You may have noticed that RIPE Atlas launched a new user interface, which is very similar to the one that RIPEstat launched recently; other RIPE NCC projects are using it now too ‑ the RIPE Database just switched last week or the week before. This new UI is faster and better, and we would like to think it will work more nicely for the people who are actually interested in interacting with the system on the UI side.
.
We also introduced the probe birthday gifts around the last RIPE meeting, and, since then, I have seen many people thanking us ‑ and each other ‑ for these birthday gifts. It's really nice when your probe is up for years and years and years; these gifts are basically credits proportional to the uptime of your probe.
.
As you can imagine, there is always a lot of work on the infrastructure. We always get questions like: why is this probe involved in the measurement? Why isn't that probe involved? Why is my probe down? Why is it up? What does this thing show? Is it broken or not? A lot of energy is spent on answering those questions and improving on them. And we are working on another set of hardware probes.
.
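Many of those probe status questions can also be answered programmatically. A minimal sketch, assuming the public RIPE Atlas REST API's /api/v2/probes/&lt;id&gt;/ endpoint; the probe ID and the status payload below are illustrative, and no request is actually made:

```python
# Where each probe's metadata (including connection status) lives.
ATLAS_API = "https://atlas.ripe.net/api/v2"

def probe_url(probe_id: int) -> str:
    """URL of a single probe's metadata document."""
    return f"{ATLAS_API}/probes/{probe_id}/"

def is_connected(probe: dict) -> bool:
    """True if a probe metadata document reports the probe as connected."""
    return probe.get("status", {}).get("name") == "Connected"

# Illustrative payload shape, modelled on the probes endpoint.
sample_probe = {"id": 6001, "status": {"id": 1, "name": "Connected"}}
print(probe_url(6001), is_connected(sample_probe))
```
.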
There were some social activities, mostly around the birthday. Around the last RIPE meeting, before and after, Vesna organised a software probe deployathon, in which I think we got something like 30 or 40 probes during the day from enthusiastic people from all over the world, which was really nice to see.
.
Then we are working on streamlined ways of sponsoring RIPE Atlas ‑ if you are so inclined, it will be much easier to do so ‑ on notification frameworks for probes going up and down and everything else that's happening in the system, and also on collaborative work with other services, trying to synchronise how these services are promoted and presented and what the introductions to these services look like. So you can expect a lot of changes here soon.
.
On RIPEstat. Well, I'm not going to tell you anything because Christian basically said all the words that needed to be said. Thank you, Christian, for that.
.
There is one more thing. We published a couple of ideas from our own team where we thought: well, these are good ideas, but we just don't have the time or the energy to actually get to them. Some of the people out there, especially in academia, might pick up on them and implement them, or maybe something similar, and if they want to work together with us on these, we will be really, really happy. This is a link to the RIPE Labs article; these things could be interesting to you.
.
And then one more thing. Christian also said that we are hiring full stack developers, but we're also looking for front end developers, so I thought we could take this opportunity to pitch that. Please check out our careers page, which is for the RIPE NCC in general ‑ there are other jobs as well ‑ but, in particular, the R&D team is looking for front end and full stack developers. If you are interested, let us know. And that's it. I think I fitted into my time frame.

BRIAN TRAMMELL: That was amazing. That was actually, I think, the second most amazing tools update ‑‑ the most amazing one was the one where we told you four minutes before you were supposed to do it and you were on vacation. Thank you very much.
.
I think we have a negative 7 seconds for questions. So, if there is like a question that fits into the negative 7 seconds, please ask it, otherwise you know where to find Robert.
.
Again, thank you very much. Thank you to all of our presenters. I think one of the bits of input we'll look at here is maybe to try and get more than an hour the next time. This was a relatively fast‑paced agenda and we had a couple of presentations that we had to roll over to the next time, so we'll look at that when we put the agenda together the next time.
.
Thank you very much for coming to MAT Working Group and, with that, we will let you off to the coffee/other beverage break, and we will see you at the next one, whether that be virtual or not. Thanks a lot. Have a good day.

NINA BARGISEN: Thank you. Bye.
.
(Coffee break)