DNS Working Group session
.
RIPE 82
.
19 May 2021
.
14:30 (UTC plus 2)

SHANE KERR: Hello everyone. Hopefully you can hear me. My name is Shane, I am one of the three co‑chairs for the RIPE DNS Working Group, along with Joao and David. I hope we have an interesting session lined up for you today. It's good to virtually see you all.
.
Let's start off going ahead through our agenda. We're just going to do a little bit of agenda bashing, which we are doing now.
.
We will then have a discussion of resolver centrality, that is from one of our co‑chairs and his colleague, Geoff Huston at APNIC. We will get a quick update from the RIPE NCC about the DNS that it operates. We are going to have an overview of CDS and CDNSKEY. And we're going to end up with a discussion of a proposed EU directive which has implications for DNS operators in Europe.
.
So, as far as admin trivia goes, we don't have a whole lot to discuss on our end. The DNS Working Group has been doing a series of virtual sessions which we were very happy with; we don't have any more planned right now. If you have a topic that you want to discuss, you can contact us and we'll see if it makes sense and set up a session. It's pretty lightweight and easy to do, and I have been happy with the results of those so far. I think we have had a lot of interesting discussions and presentations, but we're basically done with that right now. If you have something you want to discuss, we're open to it.
.
Other than that, I think that's about it. Why don't we go ahead and I'll pass the session over to Joao.

JOAO DAMAS: The first session is going to be given by Geoff, but he chose to prerecord, given that it's a bit late on his side of the world and he will be a bit more awake during the day. So here we go.

GEOFF HUSTON: My name is Geoff Huston, I am with APNIC, I am the chief scientist. I'd like to take this opportunity to report today on some measurement work that I have undertaken with Joao Damas on the issue of resolver centrality in the DNS. Now, I would normally have done this live, but it gets pretty late at night, and I need my beauty sleep, so I hope you can forgive me for prerecording this, and I'll certainly be around to answer any questions that you might have when this is replayed.
.
So without further ado, let me share my screen and get into this.
.
So, why pick on the DNS to look at centrality? And the reason is, the DNS is the window on the Internet's soul. Everyone and everything uses the DNS as the precursor to almost everything it does, and, if you think about it, if just one entity controlled the DNS, then, to all practical purposes, they don't just control the DNS; they actually control all of the reachable Internet, everything you can name.
.
So, the theme of to what extent the Internet is at risk from such a degree of centralised control is certainly a useful topic of conversation. So that's why we're picking on it.
.
In this presentation, I'll very quickly go through: what's the problem, what does it mean, how do we measure it, what do we measure, and what can we actually say about this.
.
Firstly, centrality. It's a current topic of conversation. The Internet was certainly born in a flurry of entrepreneurial activity. Every single country saw tens, even hundreds of competing Internet service providers. Everyone operated little bits of the infrastructure and it was a highly vibrant, highly competitive space. Those times are over. Fewer and fewer entities now operate the Internet's infrastructure. So what we are seeing is massive aggregation.
.
But the real question is: Is that a problem? Does this sort of lack of competition actually create woes and distortions that ultimately are to the detriment of the consumers?
.
Adam Smith said, you know, competition is that invisible hand that guides the market. Innovation creates more efficient production, more efficient production benefits consumers, and it benefits the producer, who has a cheaper good and is rewarded with greater market share. So competition guides providers to innovate and be efficient.
.
If the market consolidates and distorts to the level that competition is reduced, you actually get detrimental effects. Without that competitive pressure the incumbents could create barriers to entry against everyone else. They can consolidate their position, and that incumbent position would reduce pressure on competition, which allows them to be less efficient and to actually stop innovating, and consumers end up paying a massive premium, and if you think I'm describing the telephone network of the 1970s and '80s, you'd probably be right. At some point, once you eliminate all forms of competition, the resultant monopoly, that massive aggregation, is its own huge distortion and problem. So, that's a reason why this is a useful topic to think about.
.
Now, when we turn our focus to the DNS and talk about consolidation, certainly we have been there before. For many years, the BIND software was almost the only software out there; it was the de facto monopoly for DNS software. And at that time, you know, every recursive resolver, every authoritative server simply ran BIND software. That's changed a lot in the last ten years or so. There are an enormous number of platforms, some very large, some less so, but certainly BIND is no longer the only player in this game. So if we're not going to find consolidation in the software, where else might we find it? Name registration services: we are already seeing some of the registrars buy up some of the others and it seems that that market is aggregating. Name hosting: certainly again, the same kind of picture, some of the folk that host names for you have been buying each other and now there is a smaller number of larger providers in that market space.
.
And, of course, name resolution services.
.
Now, I'm not going to look at all three, there is no time, and measurement is difficult on some of these in any case, and trying to measure their effects is even harder, so let's focus. And what we're going to focus on is the recursive resolver market.
.
Now, by and large, no one ever built these separately for a long time. DNS resolution was part of the job of your Internet service provider. You get access, you get the DNS. You know, the addresses of resolvers, as well as your IP address, were all part of traditional DHCP or whatever protocol you were using to actually load you up and connect you to the Internet.
.
Now, if DNS resolution is bundled into the ISP business, it's already aggregating, because with ISPs getting fewer and larger, the market itself is getting more and more concentrated, simply as a natural result of that. And so the question is not whether the market in recursive resolvers is aggregating due to the ISPs themselves aggregating, but whether it is more than that. Are we aggregating over and above that level of ISP concentration and, if so, where might we see it?
.
So the place to look is the dedicated DNS resolvers that aren't bundled as part of an ISP service operation, so let's quickly look at the rise of these open DNS resolvers.
.
There are around 6 million of these operating today, if you can believe it, and the numbers are probably right. But most of these aren't deliberate. Most of these are errant consumer premises equipment that simply answers on both sides. Whoops, you were only meant to answer on the inside, guys.
.
Other things are actually quite deliberate, and there is a smaller number of folk that deliberately ran open resolvers. I am not sure when all this started, but an early example of this was actually BBN Planet in the mid‑90s, which operated a service at 4.2.2.2 that was open to anyone to use. And that had some degree of popularity, but it was really when it was combined with anycast, so that a single service address could be distributed over a large number of platforms all around the Internet, that this became feasible as an Internet‑wide service. And one of the earliest folk to actually take that on and run with it was the OpenDNS project, a dedicated DNS resolution service that had some scaled‑up infrastructure. But the picture changed dramatically when Google entered the market with its public DNS model. And so now, you know, these open DNS resolvers are certainly part of the infrastructure of the DNS. But we're not charting their rise; we are charting how central they are, how much of the market share they have aggregated.

The question is, what is their market share? Now, when I say use, I don't necessarily mean I typed in an address on my device and I am using that because I changed things. Your ISP might have done it for you, and so, no matter how, we're going to talk about what proportion of users have their queries sent to particular open DNS resolvers, and does that constitute aggregation in this market?
.
So, how are we going to measure this? I have done this a number of times using Google Ads, so I won't go into this in much detail. But, in essence, we generate around 10 million ad impressions per day and what those ads do is fetch a URL. The URL uses a unique DNS name, and so every fetch needs a resolution. So we get a feed of 10 million new names being resolved from all over the Internet every day.

Now, from that, we need to map this. I can't tell what the user is doing inside the ad, but we run the only authoritative servers for that domain name. So, however they manage to resolve it, we see the query. We see the query not from the client's address, but from the recursive resolver.
.
Now, I don't know what resolver address the user originally pointed at, but I do know the worker IP address that is being used by the recursive resolver to ask this authoritative server, and we need to map, if you will, the presentation addresses into the worker addresses that are used.
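.
As a rough sketch of the measurement idea described above (not the actual APNIC code), the snippet below builds a single-use hostname under a hypothetical measurement domain and resolves it; the query is forced through whichever recursive resolver the user's network hands it to, and that resolver's worker address then shows up in the experiment's authoritative server logs.
```python
import socket
import uuid

# Hypothetical domain; the real experiment uses domains whose only
# authoritative servers are operated by the measurement platform.
MEASUREMENT_DOMAIN = "dns-measurement.example"

def fetch_unique_name() -> str:
    # A globally unique label means no cache can already hold the answer,
    # so every impression produces a fresh query at the authoritative side.
    qname = f"{uuid.uuid4().hex}.{MEASUREMENT_DOMAIN}"
    try:
        # The resolution itself is the measurement: the recursive resolver
        # must ask our authoritative servers, and its worker address is logged.
        socket.getaddrinfo(qname, 443)
    except socket.gaierror:
        pass  # NXDOMAIN or timeout is fine; the query was still observed
    return qname
```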
.
It's also useful to understand how the DNS manages queries, because it's not a case of query in, query out. No, no, no, no. And we might also want to think about what happens when you have got multiple entries in your configuration, in resolv.conf or its equivalent. How are those lists used in DNS resolution services?
.
So, what worker addresses are used by open DNS resolvers? A good way to answer that question is actually with RIPE Atlas and we do periodic sweeps using Atlas probes to actually go to known resolvers and look at the worker addresses that came back to authoritative servers to reveal those maps. Some folk actually published that data, but not everyone. So we certainly need to find that out. So we're trying to map as it shows here the service address to the various engine addresses that are used.
.
Query in/query out? No. When we put in a unique name at the client side and count the number of queries that appear at our authoritative server, the average is 3.4. Now, I only look at the first 30 seconds; after that it's not clear why the DNS is still echoing, and the majority of the queries happen relatively quickly, inside around 8 seconds in our tests. But interestingly, across 30 seconds, some lucky person was using a resolution infrastructure that managed to generate 1,800 queries, just in 30 seconds.
.
Good prize.
.
And how many resolvers were used?
.
The average number of unique IP addresses that ask us per name is not one, it's actually two. Now, again, what we generally see is 1, 2, 3 or 4, but it goes up to 94 over 30 seconds. So, it's not that a question goes to a resolver, that resolver then sends the question to a worker engine and that's the end of it; there is a certain amount of splaying of those queries across a number of engines as it tries to resolve. The other thing we want to understand is, what is the difference between the first resolver and the full set? One of the ways to tease this out is to look at resolution queries that always result in a ServFail response.
.
So what happens with ServFail is, you try the next resolver and try and try and keep on trying till you get bored, because you are never going to get an answer other than ServFail, and the DNS is nothing if not persistent, in an obsessive‑compulsive manner.

So when we now set up exactly the same thing, but now against an authoritative server that only answers ServFail, this persistence comes through. In 30 seconds, the average query count is 36.5 queries. Okay. There is something in Taiwan that managed in 30 seconds to generate 292,000 queries. Well done! And how about the number of resolvers? Well, it blows up from 2 to a little under 9 on average. But again, in Taiwan, we found 1,348 unique IP addresses querying us, all inside 30 seconds, IPv4 and IPv6.
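.
A minimal sketch of how those per-name statistics could be tallied from an authoritative server's query log, assuming simplified (timestamp, unique label, resolver address) records rather than the real PCAP pipeline:
```python
from collections import defaultdict

def summarise(records, window=30.0):
    """Count queries and unique resolver addresses per unique label,
    restricted to the first `window` seconds after the label is first seen."""
    first_seen = {}
    queries = defaultdict(int)
    resolvers = defaultdict(set)
    for ts, label, resolver_ip in sorted(records):
        t0 = first_seen.setdefault(label, ts)
        if ts - t0 <= window:
            queries[label] += 1
            resolvers[label].add(resolver_ip)
    n = len(queries) or 1
    avg_queries = sum(queries.values()) / n
    avg_resolvers = sum(len(s) for s in resolvers.values()) / n
    return avg_queries, avg_resolvers

# Illustrative data: one label queried twice inside the window, once outside it.
records = [
    (0.0, "abc123", "192.0.2.1"),
    (0.4, "abc123", "192.0.2.2"),
    (45.0, "abc123", "192.0.2.3"),  # outside the 30-second window, ignored
]
print(summarise(records))  # -> (2.0, 2.0)
```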
.
Whatever they are doing, they are just having too much fun.
.
So, let's now look at this data on an aggregate sense. So, if we take a day's worth of data, 10 million odd queries, we saw 140,000 visible recursive resolver IP addresses. But interestingly, just 150 of them, just 150 ‑‑ 0.1%, accounts for 20% of all users, and 1,500 of them, 1%, accounts for 50% of all users.
.
If we take it up to 10,000 resolvers, we cover 90% of all users. So, in some ways, this is a heavily aggregated market. But I am using individual resolver addresses and that's misleading, because some of the big server farms actually use an awful lot of addresses for one logical service. So now let's aggregate this down and look at Google as a single service, all the addresses they use.

Look at Cloudflare as a single service, and where I don't have data on the open DNS resolvers, let's group by autonomous system number, by AS number. So then, inside some of the large Chinese ISPs, it will all come out as one.
.
Now, those 140,000 become 14,600 visible resolvers using that metric, but it's not individual resolvers, it's more like logical resolvers. 15 of them serve 50% of all the users. 250 of them serve 90% of all users. And so the question is, is this what we mean by centralisation? Is the DNS already heavily aggregated?
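.
As an illustration of that cumulative view, and with entirely made-up counts, grouping users by logical resolver (an open resolver service where known, otherwise the origin AS) and ranking gives the kind of concentration curve described above:
```python
from collections import Counter

# Hypothetical user counts per logical resolver; real figures come from the
# ad-based measurement described earlier.
users_per_resolver = Counter({
    "Google Public DNS": 290_000,
    "AS64500 (large ISP)": 120_000,
    "Cloudflare 1.1.1.1": 35_000,
    "AS64501 (another ISP)": 30_000,
    "long tail of small resolvers": 25_000,
})

total = sum(users_per_resolver.values())
cumulative = 0
for rank, (resolver, users) in enumerate(users_per_resolver.most_common(), 1):
    cumulative += users
    print(f"top {rank} logical resolvers cover {cumulative / total:.1%} of users")
```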
.
Maybe we should break it down to distinguish between open resolvers, resolvers my ISP has provided for me and resolvers that are in the same country but not necessarily the same AS number and everything else.
.
This is what we see now when we look at it day by day, and this is data across around a nine‑month period. What we see when we do this for the first resolver is that 70% of users use a resolver in the same network. 17% stick within the same country, and 15% use Google's 8.8.8.8 service, and everyone else is kind of bits at the bottom.
.
What if we extend that via ServFail and look at all the resolvers that people might use? For the same ISP, it's still around 70%. Google ‑ and I am sorry I switched the colours on you ‑ Google doubles: another 15% of users, taking the total to 29%, have Google's resolver address somewhere in their resolver list. So it's now a little under 30%, 29% of folk might use Google, it's certainly in their lists, and for the same country, it's just under 20% ‑‑ 19%.
.
So, let's now look more closely at Google itself. Country by country, in each country how much is Google used? Africa is using Google a lot, Somalia, Nigeria. In the Middle East, in Iran, there is extensive use of Google. The greener it is, the greater the density of ‑‑ proportion of population in that national community that sends their queries to Google's public DNS.
.
Let's invert that and say, well, of the total population that use Google ‑ a billion people, probably a bit more ‑ where are they? And interestingly, a huge proportion are located in the subcontinent of India, and in fact what we actually find is that the largest pool is India, with 19% of Google's DNS users, and we also see major populations that use Google in China, the US, Nigeria, Brazil and Iran, each of which has around 4 to 6% of the total use of Google.
.
What about the next largest open resolver, Cloudflare's 1.1.1.1 service? Heavy use in Turkmenistan, Iran, Niger, Cameroon and the Congo, but not so much elsewhere. And again, what we are finding there is that Cloudflare's market share is around 3.5%, around a tenth of that of Google, and to some extent it is not as widely used by any particular metric, but the level at which it is used is, in some sense, in these developing economies rather than other more established Internet areas.

One interesting case I'd like to highlight here is Iran, where one of the major ISPs in Iran, MCCI, actually doesn't use a single open DNS resolver; it very generously sends queries to all of them, all of the time. I don't know if it's designed to confuse folk or to sort of take a vote of 5 out of 7 or whatever, but it's certainly one of the odder cases where we see a deliberate splaying of queries, doing the queries in multiple places at the same time as part of the ISP's operation. It's not a user thing, it's the ISP itself.
.
And so the next kind of question here about the centrality argument is, who is making the choice? Is it you and I playing with the config, or is it the ISP saying 'use my resolver' when that resolver is actually just a forwarder? And here is an example, and it's one of many: a mobile phone operator in Vietnam where, inside that network, 86% of users send their queries to Google. That can only happen when the ISP itself is doing the forwarding. The users haven't changed anything. They are not deliberately divorcing their DNS from their ISP. They are just being carried along with them. And so what we actually find is that this is not something that users are making a choice about; it's actually where ISPs are going.
.
So when we talk about resolver centrality, are we really talking about a shift to this small clique of open DNS resolvers? Well, only if the small clique is one, because, quite frankly, the only major open resolver out there at this point, from the numbers, is Google's public DNS. And it's not the users configuring their services, it's the ISP.
.
We also find, interestingly, that the weekday use of Google is higher than the weekend. And so enterprise customers of ISPs appear to be, if you will, more willing to ditch their ISP for DNS resolution than the consumer customers of the ISPs that host those enterprise activities. And so enterprise customers appear to be a major factor here.
.
Is this something that's happened overnight? No, it's something that happens quite slowly, it's a slow and gradual trend, and the real issue is, is this a concern? Is this excessive market control in the hands of a small set of providers? Well, no, not really, and the more folk, including Google, that do DNSSEC, the harder it is for operators of DNSSEC‑validating resolvers to distort the answers. They are a faithful reproduction of what is in the DNS in any case, because DNSSEC will tell you that.
.
So far as I can see, the market is not distorting. And in that sense, there is not that much to worry about. Or not? Don't forget that 80% of the platforms out there in the mobile market use Android, and about a similar number of browsers are Chrome. And so it's not necessarily the recursive resolver that's the issue, but it's that shift of DNS function into the application realms that I think is actually a far greater threat for the current model of the DNS, and the threat is not about aggregation. Oddly enough, it's a threat to a common single infrastructure, it's a threat to the cohesion of the DNS, because once application realms chart their own destiny, then I think the entire conversation about the DNS will necessarily be different.
.
Thank you very much.

JOAO DAMAS: All right. Geoff is actually online. And there is one person in the Q&A panel, I'll read out ‑‑ "Are you saying that the ISP is providing 8.8.8.8 or 1.1.1.1 to the user of ...... corporation or are you saying that the ISP is providing an ISP IP and the ISP is proxying the query of public resolver."
.
Do you want to answer that, Geoff, since I see you there?

GEOFF HUSTON: We cannot see inside the DNS, but our assumption is that, instead of actually operating a full recursive resolver, which we expect would be the default, the ISP is taking the easy way out and doing a very small engine that just simply has a single forwarder statement and all the queries it receives incoming from its clients it just forwards off to Google. That's the default. I pointed out that one large provider in Iran sends the incoming queries from its customers to seven or eight of the popular DNS resolvers at once. It's kind of in parallel forwarding, I am not sure why they do this.
.
So it's not sort of anything tricky here. I think it is just a really simple forwarder statement, that's certainly what it looks like. Thanks.

JOAO DAMAS: All right. And the second question is from Christian: What can be done to hinder further centralisation?

GEOFF HUSTON: Well, the point is, in some ways, it's not that everyone is switching to Google per se; it's actually that ISPs themselves are getting larger and larger. And that is actually an outcome of the world having gone mobile. Because of the way spectrum is being allocated, every single country, large or small, pretty much has a maximum of three major ISPs, and if you are a very big country like India or China, that means that those ISPs are awesomely large. Now, that is reflected in the numbers. So, it's not that the DNS is being centralised per se in terms of resolution services. It's more that the combination of the way spectrum is being allocated to a small number of massive ISPs and the way those ISPs do DNS means that we actually see a sort of concentration in the DNS which is a reflection of concentration in the ISPs.
.
Don't forget, there is no financial incentive, other than surveillance, to actually run a recursive resolver. You can't lie with the answers any more. DNSSEC made that impossible. And NX domain substitution which sort of gave you potential revenue by turning 'no such domain' into a referral to a search list, has been so excessively frowned upon that folk don't do it any more.
.
In some ways, the incentives to run open resolvers I think are relatively weak, and so it only happens when you have got a particular angle to do so, and the only folk who really have a search engine they want to protect is Google. And so Google is running the DNS not for any purpose other than to defend its search market share, which is kind of logical. And so in that respect, I actually don't think the DNS itself, in terms of the resolution market, is getting skewed at all. The underlying issues around centrality are deeper and a little bit more insidious. One is spectrum, and the other one that I hinted at is this move of almost all aspects of the Internet away from the communications platform, away from the lower levels, and up into the application, and what should scare anybody is the massive market share of Android and the similarly massive market share of Chrome.
.
And if Chrome sort of takes the ball and runs away with it, that is then I think the ultimate piece of centrality, because that's going to take almost 80% of users with it, and that's something to truly worry about. I'm not sure if I answered your question or not. It kind of led onward, but thank you anyway, it's a good question.

JOAO DAMAS: There is one more comment, but since there is no question and we need to move on because of time, I'll pass the comment on from Andrew Campling and we can move right along to our next talk. And that will be ‑‑

SHANE KERR: Next is the RIPE NCC DNS update by Anand.

ANAND BUDDHDEV: Hello. Good afternoon to you all from a very grey and rainy Amsterdam. Welcome to the RIPE 82 DNS Working Group RIPE NCC update.
.
This afternoon, I'll be speaking briefly about some of the stuff that we have been doing and some of our upcoming plans in the coming months.
.
So, first up, I would like to talk about CDS for reverse DNS. We have talked about this at previous RIPE meetings; CDS or CDNSKEY is the popular name for a pair of RFCs that define how a parent zone operator can detect key rollovers in child zones and update the parent zone automatically. So the RIPE NCC community asked us to implement this for the reverse DNS zones that the RIPE NCC operates. And I am happy to report that we have been live since the 25th of March this year.
.
What we have in place is some code written in Python by Ondrej Caletka, and once a day at 7 UTC, a multi‑threaded Python scanner goes away and scans all the secure delegations in all the reverse zones, and detects CDS records for ones that might be in a state of key rollover.
.
It then produces a bunch of domain object updates. These domain object updates are then sent into the RIPE database by an update script, and a few minutes later the provisioning kicks in and the DS records appear in the DNS.
.
At the moment, what we have is a limit of 100 objects that may be updated automatically and the reason we did this is to observe the update process. We didn't want a situation where there might be a huge update because of some kind of bug or issue and break DNS for lots of people. So if there are more than 100 domain objects in the queue, then the operators get an alert, or a ticket about this, and we can investigate and update things by hand.
.
We haven't had a situation like that so far, but we will keep this limit in place for a while longer.
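.
A minimal sketch of the scanning step, using the dnspython library and assuming the RIPE database "ds-rdata:" attribute syntax; the real scanner is multi-threaded, validates the CDS RRset with DNSSEC, and applies the 100-object limit described above:
```python
import dns.resolver

MAX_UPDATES = 100  # safety limit, as described above

def scan_delegation(zone, current_ds):
    """Return a domain object update (as text) if the child's CDS records
    differ from the DS records currently in the parent zone, else None.
    Sketch only: no DNSSEC validation of the CDS answer is done here."""
    try:
        answer = dns.resolver.resolve(zone, "CDS")
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        return None  # no CDS published: the child is not requesting a change
    cds = {rr.to_text() for rr in answer}
    if cds == current_ds:
        return None  # parent already matches the child's wishes
    lines = ["domain:   " + zone]
    lines += ["ds-rdata: " + rr for rr in sorted(cds)]
    return "\n".join(lines)

# Hypothetical driver: alert operators instead of pushing a suspiciously
# large batch of updates into the database.
# updates = [u for z, ds in delegations if (u := scan_delegation(z, ds))]
# if len(updates) > MAX_UPDATES: alert_operators(updates)
```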
.
I would like to point out, as we have also said before, that this process does not allow for any kind of bootstrap, so if you have newly signed your reverse DNS zones and you don't have DS records in the parent zone, our process will not automatically pick up the CDS records and update the parent zone.
.
This bootstrap requires a little bit more state, and is a little bit more complex to do. So we decided not to do it, at least not in the first phase.
.
It's something we could consider if there is a demand for it, but for now we're not allowing bootstrap.
.
But of course if an operator wants to go from secure to insecure by deleting their DS record, they can still signal this via a special kind of CDS record and this is allowed, because we can of course validate the CDS record and thereby understand the operator's intention.
.
Here is a little chart showing you some numbers. From the 12th April ‑‑ so this is just a little bit after we went live ‑‑ until the 16th ‑‑ which was a few days ago ‑‑ this chart shows you the number of domain objects updated on a daily basis. So, mostly it's just 0 or 1 update. We have one user, which is Ondrej himself, who frequently rolls his keys, and so his zone gets updated. But in between we also see some more updates in there. You will see some spikes at 41, 40 and 35, and this is when a particular operator, for example, rolls the keys in all their reverse DNS zones, and then we get a batch of updates.
.
As I mentioned earlier, we have a limit of 100, but this limit has not yet been triggered.
.
I'll move onto the operations of k‑root. The RIPE NCC continues to successfully operate one of the root name servers which is k‑root. Currently, we have 83 active sites and a total of 93 servers in these sites. This is because some sites have multiple servers behind the router. Most of the other sites are single Dell servers which are DNS in a box and do the BGP service themselves.
.
At the core sites that we have, we have five core sites for k‑root. We have been doing regular maintenance replacing the routers and the servers and this work continues as hardware life cycle dictates. We have also been upgrading some of the ports of some of the core sites from 1 gigabit to 10 gigabit and we will continue this in the coming months.
.
Another thing to note is that DITL took place earlier this year; that stands for 'A Day in the Life of the Internet', and this is when various DNS operators on the Internet submit PCAP files containing DNS queries and perhaps responses for a period of 50 hours, and this data is stored by DNS‑OARC and is available to researchers who might want to do some kind of research with DNS. So, this year, k‑root participated, and from 83 of our servers, we uploaded 50 hours' worth of PCAP files to DNS‑OARC.
.
I'll move over to AuthDNS. This is the RIPE NCC's second anycast DNS service and this is where we provide service to ripe.net as well as all the reverse DNS zones that the RIPE NCC operates. We have three core sites in Amsterdam, London and Stockholm and then we have three hosted sites in Vienna, Rome and Oslo.
.
Just last week, the Amsterdam site has been upgraded to a 10 gigabit network with a new router in place and we also plan to do some upgrades to London and Stockholm in the coming weeks and months.
.
One of the interesting things I would like to mention here is that we want to expand the footprint of the AuthDNS anycast network. We have an application that manages the hosted k‑root applications that we get in, but we are now expanding that to cover AuthDNS, and as soon as it is ready we will be accepting applications to host instances of this service, and the requirements for the hosts, as in the hardware and the bandwidth, will be published very soon.
.
Next up, I would like to talk about DNSSEC algorithm rollover. So the RIPE NCC has been signing its DNS zones with algorithm 8, and we are following recommendations from the latest RFCs, which recommend algorithm 13 now. We talked last year about doing this algorithm rollover in 2021, and this is what we're busy with, so testing is already in progress. We have our Knot DNS signer and we have a test zone whose algorithm we're going to roll, and we will check that all the steps are performed correctly.
.
We will also be querying all the major resolvers to check that they can validate the records in this zone.
.
We are not actually expecting any obstacles or issues because algorithm rollover has been performed previously. We rolled our algorithm from 5 to 8, and we gained a great deal of experience then. We also wrote a RIPE Labs article about this, and many people have also performed algorithm rollovers, so we don't expect any issues. We are also aware that there is widespread support for algorithm 13 in validators, so we don't expect any issues there either.
.
So just soon after RIPE 82 we will announce the dates when we will be starting the rollover of our ‑‑ of the algorithm of all our zones.
.
And that brings me to the end of my short presentation. Thank you for listening, and if you have any questions, please ask away. Feel free to also send e‑mails to my e‑mail address if you want to discuss anything offline.
.
Thank you.

SHANE KERR: Thank you, Anand. That was good and informative. Unfortunately, I don't think ‑‑ we're running a little bit short on time and I don't think we're going to be able to take questions live. So, I'd like to ask ‑‑ I see there is already one question in the Q&A. Maybe you can answer Morris Mueller in the chat or contact him directly.
.
Sorry about that.

ANAND BUDDHDEV: Okay.

SHANE KERR: Sorry. Great. So, let's move along to our next presentation, which is on deployment of CDS and CDNSKEY, by Ondrej Caletka.

ONDREJ CALETKA: Hello, let me just share my slides. Hello everybody. My name is Ondrej. I work for the RIPE NCC, and I was asked to give an overview of the CDS/CDNSKEY updates ecosystem. So, how does it work, and is this technology actually working? Because, as you heard from Anand just recently, we have now deployed it for the DNS zones that are managed by the RIPE NCC, so you can use it not only for reverse zones but actually also for ENUM, if you know what ENUM is. It's part of the RIPE database, just like reverse zones.
.
To be on the same page: in DNS and DNSSEC, every zone is an island. It has its own signatures, its own public key, and what makes that part of the DNS tree trusted is the delegation from the parent zone, which is made by the DS record. The DS record is basically a hash over the public DNSKEY and also the zone name. So even if you share a key between zones, you still have to have different DS records for different zone names.
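.
A small worked example of that point, using the dnspython library and an illustrative algorithm 13 key: the same DNSKEY hashed under two different owner names yields two different DS records.
```python
import dns.dnssec
import dns.name
import dns.rrset

# Illustrative DNSKEY (algorithm 13); any real KSK would do.
dnskey_rrset = dns.rrset.from_text(
    "example.org.", 3600, "IN", "DNSKEY",
    "257 3 13 mdsswUyr3DPW132mOi8V9xESWE8jTo0dxCjjnopKl+GqJxpVXckHAeF+"
    "KkxLbxILfDLUT0rAK9iUzy1L53eKGQ==",
)
key = dnskey_rrset[0]

for zone in ("example.org.", "example.net."):
    ds = dns.dnssec.make_ds(dns.name.from_text(zone), key, "SHA256")
    print(zone, ds.to_text())

# The two digests differ even though the key bytes are identical, because
# the owner name is part of the hash input: a shared key still needs a
# distinct DS record per zone.
```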
.
How do you usually update it? There are usually two standard ways to do it. The first one is to submit DS records: as a child, if you want to change or submit your first DS records, you just submit them directly, either using something called EPP, the Extensible Provisioning Protocol, which is sort of the standard for domain registration ‑ so if you are a registrar and have a contract with a registry, you usually use EPP to talk to the registry.
.
If you are not a registrar, then you actually have to e‑mail your registrar to do it, because most registrars don't even have a web interface or something like an API that you could use to submit DS records. So it's again usually pretty manual work.
.
And if you are your own parent, what you can do is make a zonelet, which is a text file with DNS resource records, and this text file you just somehow include into the zone file of the parent zone; this is the easiest way to do it.
.
But, there are some registries that actually prefer a slightly more complicated way, and it is that you actually submit to them only the DNSKEY, the public key for your zone, and they will do the DS calculation themselves.
.
This is the only option for registries like .eu or .cz and also many others, but I would say slight minority of the registries.
.
The reason for this is basically what I said in the previous slide, mostly to facilitate key sharing, which is very, very good advantage for, like, web hosting providers which host thousands of zones and they want to deploy the DNSSEC on all of them, it's much easier from the operational point of view to have just one pair of keys instead of one key per zone, and it's not very strictly ‑‑ it's not very bad for security either, so it's not a big deal.
.
And the other thing is that, with submitting DNSKEY, it's actually the registry which decides what kind of hashing algorithm will be used, because there are four hashing algorithms supported in DS records: SHA-1, SHA-256, GOST and SHA-384. Right now we are in the process of taking out SHA-1, so if you don't want SHA-1 introduced in your registry and you accept DS records directly, then you have no control over what the children are submitting to you, so this may be another valid reason to accept DNSKEY instead. And that is why the update system has that very complicated name of CDS/CDNSKEY: the idea here is basically that the child ‑‑ that's the C in the name ‑‑ is providing a sort of in‑band signalling in the DNS itself for the DS record it would like the parent to publish. It's not only signalling the DS record with CDS; it may also signal the DNSKEY record by using CDNSKEY. So I looked up how this is actually specified. I found out that if you are a child and you want to publish these, you should, in the RFC sense of the word, publish both, and if you do, they must match. And I found out that this is mostly fulfilled by all the implementations I am aware of.
.
So that's pretty good.
.
On the parent side it depends and usually parents can do either one or the other, it depends on what the parent is willing to work with.
.
This protocol works like that, it's for signalling the change so it means that if there is no CDS or CDNSKEY, that means no change and everything should stay like it is. And if you want something changed, you should publish it and then the parent should do the change.
.
And there is also another option to publish something that will make the DS disappear from the parent, if you want to switch from DNSSEC secured to insecure.
.
There is also an option to bootstrap from insecure to secure. There are a few options for how to do this, but the most common, and the only one that I am aware of actually being used, is to query all the authoritative servers to find out whether the CDS answer is consistent among them over some longer time period; if the answer is consistent for a few days from all the servers, you consider it valid, even though you have no proof.
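.
A sketch of that consistency check, using dnspython and assuming you already know the zone's authoritative server addresses; a bootstrapping parent would repeat it over several days before trusting the answer.
```python
import dns.message
import dns.query

def cds_consistent(zone, ns_addresses):
    """Ask every authoritative server directly for the zone's CDS RRset
    and report whether they all return the same, non-empty answer."""
    seen = set()
    for addr in ns_addresses:
        response = dns.query.udp(dns.message.make_query(zone, "CDS"), addr, timeout=3)
        rrs = frozenset(
            rr.to_text()
            for rrset in response.answer
            for rr in rrset
        )
        seen.add(rrs)
    return len(seen) == 1 and frozenset() not in seen

# Hypothetical usage with the zone's name server addresses:
# cds_consistent("2.0.192.in-addr.arpa.", ["198.51.100.10", "203.0.113.20"])
```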
.
So, this is the list of registries I made up from mostly from my memory and from what I was able to find out.
.
So the first registry that started it was .cz, in 2017 I think. They use CDNSKEY because they use DNSKEYs everywhere in their registration system. They support deleting DNSSEC, or going to insecure, and they also support bootstrapping from insecure by observing the same CDNSKEY record for seven days. The same registration software is used in Costa Rica's registry, and Costa Rica's registry deployed this scanning as well, so even though I haven't found any information on the website of nic.cr, I am pretty confident that it works exactly the same.
.
The second registry that deployed it is SWITCH, for the Swiss and Liechtenstein domains. Again, it's the same registry for both, so the conditions are pretty much the same. They use CDS records, so they use the DS digests directly, and the grace period for bootstrapping from insecure is just 72 hours. Of the last two, the most recent is the RIPE NCC, and before that there was .sk, who started this in 2020; from the information they publish, it seems very similar to SWITCH, with 72 hours.
.
As you can see, all of them actually support going insecure, and the RIPE NCC is the only one which does not support bootstrapping. This is something that we may consider if we find out that it's worth it, because it's a little bit tricky.
.
Regarding the provider side, those who make use of this signalling: the first one that offered this service was Cloudflare. I have checked recently and they support publishing both CDS and CDNSKEY and they also support deleting, so if you want to switch off DNSSEC you just click 'turn off DNSSEC' and what happens is they will not turn it off immediately, but they will immediately start signalling that you want to delete the DS records, and once they notice that the DS record is gone, then they presumably stop signing the zone.
.
I also found out about a provider called DNSimple. GoDaddy uses this signalling, and I also got some rumours about Google Domains, which is actually a registrar, so they should be able to do these changes using EPP, the standard way, but for some reason, CDS probably works better for them.
.
If you want to host DNSSEC yourself, I have pretty good news, because I found out that most major open source DNS server solutions support CDS very well: not only Knot DNS, which is developed by CZ.NIC, who sort of pushed this forward, but there is also very decent support in BIND 9 and PowerDNS. Knot DNS is still the only one which supports fully automated KSK rollovers. It's very nice to watch. There is part of a log at the bottom of the slide, so you can see that when it is time to change the DS record, it will actually publish the CDS records and then it will start probing some configured resolvers. In my case, I configured loopback and also the quad‑8 and quad‑1 resolvers. And only after all of them are consistently showing the new DS record will it conclude that the change was successful and push the rollover forward. So you don't have to make any interventions.
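.
As a rough illustration of that probing step (not Knot's actual implementation), a rollover daemon could check a list of configured resolvers and only declare the DS change visible once all of them return the expected record:
```python
import dns.resolver

def ds_visible_everywhere(zone, expected_ds, resolver_ips):
    """Return True only once every configured resolver answers the DS query
    for `zone` with the expected record, mirroring the check described above."""
    for ip in resolver_ips:
        res = dns.resolver.Resolver(configure=False)
        res.nameservers = [ip]
        try:
            answer = res.resolve(zone, "DS")
        except Exception:
            return False  # timeout, SERVFAIL, or no DS yet: keep waiting
        if expected_ds not in {rr.to_text() for rr in answer}:
            return False
    return True

# e.g. probe loopback plus the quad-8 and quad-1 resolvers, as in the talk:
# ds_visible_everywhere("example.org.", "12345 13 2 <digest>", ["127.0.0.1", "8.8.8.8", "1.1.1.1"])
```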
.
For the others, as far as I know, you still have to step in manually and say, OK, now the parent has changed the DS, so you can continue with the process of changing keys.
.
All of them also published both CDS and CDNSKEY. So this is not an issue.
.
On the parent side, if you are the parent, either registry or registrar, or you just have some zones that are in a parent/child relationship, the easiest thing to use is actually part of BIND, a tool called dnssec-cds. This tool will help you keep the zonelets with the DS records up to date by querying for CDS, or even for CDNSKEY records. This is the one exception on the parent side where parents can actually make use of both types of records.
.
Other than that, there is also the FRED registry, as I told you already, which is open source. So even the CDNSKEY scanning part is open source, and you can try to use it for your own deployment if you are going to do some massive scanning; even for the bootstrapping from insecure, it may be handy. On the other hand, the documentation is very, I would say, sparse, so you will probably have to ask CZ.NIC if you want to use it in a way they didn't expect.
.
So to wrap up this talk, the adoption of this system is slowly growing. The good news is that it's already supported pretty well in DNSSEC software, so I am really glad that my reverse zone, where I roll the KSK every second day, is not the only one generating updates in the RIPE database; there are also other people who were just waiting for us to deploy the CDS scanning, and they have started using it and changing their keys without intervention.
.
It also seems to me that this sort of signalling is a nice single standard way to perform updates in all supporting registries. Even if you are a registrar with EPP access to registries, there are dialects of EPP: the EPP you talk to .cz is different from what you talk to .com. So there is even a benefit for those people to use this technology, to have this standard way of communicating the intention to change the key.
.
The registries, yes, are slowly growing. Well, we are still waiting for a killer deployment: if somebody like .com deployed this, it would be really, really nice. But at the scale of .com, regular scanning would be a serious challenge, I would say.
.
And one more thing. I created a channel called 'CDS updates' on the DNS‑OARC chat, so if you want to talk about CDS updates or this protocol in general, please join the chat. There is also a link to a GitHub repository I set up with the information that I just presented here.
.
So that is everything from me. And in case you have any questions...

SHANE KERR: Cool. Thanks, Ondrej, this has been really good. I actually find this really nice. This latest feature of the DNS is going to make things a lot handier going forward, I hope.
.
Unfortunately, we're running really short on time, so we're not going to be able to take any questions, but I guess you will be on the chat, so people can ask questions there.
.
Thank you. For the rest of everyone, we're going to play a presentation that was recorded earlier. It's going to be about 13 minutes; I know it's going to go way into the break and I really apologise for that, but we miscalculated our time, I am afraid. So let's see what we have here.
.
BERT HUBERT: Hello, and welcome to this presentation on the new and fascinating EU directive on network and information security, version 2. You may have heard about this directive. It might impinge on the way the Internet is being run and how the root servers are being regulated, and in this presentation I hope to tell you a little bit about what is going on and what is good and what is bad, because despite what you might have heard, it is not all bad.
.
For full disclosure, like many of you, I actually like the European Union, so I could spend ten minutes bashing all these regulations and how stupid it is, but I actually somewhat see the point of it.
.
It also helps that I'm actually myself a government regulator so I look somewhat favourably upon regulation. But not all is lost, I am also still a big nerd and I am still the founder of PowerDNS so I am also still convinced that we know how to run the Internet ourselves without help from Brussels.

So what's the idea behind the EU network and information security directive? In short, I see it as a way of taking computers seriously. So, we have regulation on how the water supply works, how electricity works and on communications reliability and for healthcare and other systemic things, and it must be said we rely on computers in a very big way and it's not unreasonable to ask people to take information security seriously, especially because they are not doing it out of their own accord.
.
So, already in 2018, the European Union decided to create the NIS directive and specified how countries should cooperate and exchange information but it also clarified that specific companies, important companies and organisations should be regulated. And this regulation should be about how they notify the world of security breaches, but also mandates that they have certain cybersecurity measures and practices in place, and it says that there might be fines if these things are not in place.
.
Now, no one has heard of the NIS directive. It was written in 2018, and it didn't really work.
.
Some countries took it very seriously, and there is even one country that has listed every hospital as an important service provider, and there are other countries that have listed almost no one. And also, the implementation of fines in case the security is not good was also not unified very well. So the NIS directive was not a great success.
.
Meanwhile, however, the need for good information security has not gone away. So, the European Union came back with NIS 2, the return of Brussels. And in this case, they made it clear; they said, look, we really mean it, we really mean it when we say that we want you to do information security and take it seriously.
.
So, they have now said, look, we are not going to leave it to individual countries to decide who they will regulate or not. We are going to make a list for you and, at the same time, they have said we are going to make the list somewhat simpler.
.
And so there are only two categories left in NIS 2, and these are called important entities and essential entities, and you can imagine that the rules for essential entities are more stringent than for important entities. And just to be clear, so that no one has any doubt, they added a list of things they are going to regulate. And this includes Internet Exchanges, country code top‑level domains, large‑scale authoritative servers, public resolvers, data centre service providers, content delivery networks and root servers.
.
So, they made it very clear, Brussels is here to mess with your life, and this will apply to many of the people listening to this presentation, even if you are not in the EU. The important part of this regulation is, are you essential to the EU? And in that case, the European Union is coming for you. So will this apply to me?
.
Well, sometimes it's very clear: if you run .nl or .be or .dk, then yes, this will very much apply to you. It's in there explicitly: country code top‑level domain operators will be regulated. If you are a large scale Internet access provider, you will be regulated, but you were already regulated in that case.
.
One version of the NIS 2 directive explicitly says that all root servers will be regulated and another version says that they won't be. So that's a bit up in the air.
.
So, will you be regulated? Well, if you are a small player, if you have fewer than 50 employees or revenue of less than €10 million, then you will likely not be regulated. So that's good news, because the original version of the NIS would have applied to many hobbyist labs, because they did sufficient numbers of DNS queries to be systemically important. And here it says no, the really small companies don't need to worry.
.
But there is an exception. It is possible that you are really small but that you are still providing a key role for society. So even if you are a small player and you are doing extremely important things, this regulation could apply to you.
.
So, they are coming for you.
.
And what does it mean? It means that if you have a security incident, you must report it pretty damn quickly to your national authority. If you are not in the EU, you must pick a country within the EU that represents your biggest presence there, and then you must report the incident there.
.
The reporting stuff is easy, of course. But you must also implement security measures, and these are not very well described in the directive. You must do risk analysis and incident handling and you must have business continuity plans and, very importantly, you must also know if your suppliers have these plans, and by this I do not mean your coffee supplier, but let's say you have outsourced all your servers to Amazon, then you must be sure that Amazon also has a business continuity plan.
.
You must have a plan for vulnerability handling and disclosure, you must do testing and audits of the effectiveness of your cybersecurity measures, and you must use cryptography.
.
If you don't do these things, the EU or your national authority can send you a warning or they can order you to fix it, so let's say they find that your security is not good, they can order you to fix it. And if you don't respond to that order, they can give you a fine. And some countries might even choose to start a criminal procedure against you, or they can tell you that you must stop doing business because you are not doing it safely. And in an extreme measure, they may even issue a temporary ban for certain persons to take part in your management, and this might sometimes of course be very welcome, but it's quite extreme. If you don't cooperate, the EU can also come on site to inspect whether your security is good, and they might even do random unannounced visits. So, it is quite some stuff.

It's clear that they will not do this for everyone and it's also clear that if you are only an important service provider, then this stuff will only happen after you have had a big breach.
.
If you are an essential service provider, the EU can ask, or the government, your local government can ask to see these plans, to make sure that you are ready.
.
So this is quite something, and I can understand at this point if you feel angry and want to set fire to this EU flag. Because why are these people messing with us? And you will find that even setting fire to this EU flag is not that easy, because of many EU regulations these flags actually do not combust, so it's an extremely frustrating experience.
.
How bad is this? Well, of course, no one wants to be regulated and get certified. We have no good experiences with that as technical people. It's typically bullshit. Although ISO 27001 can also be useful.

One other thing to note is that all the things the EU demands of you, every security person is also working very hard to get their own company to do. We don't get taken seriously when we ask management to have a security plan in place or take security seriously, and now the EU says, well, you have to take it seriously. So, it could actually be good. Actually, if you look at the list of things that you have to do, it is quite hard to argue that you do not want to have a plan in place to deal with vulnerabilities.
.
The big question, however, is how will this work in practice? And the rule might end up being terrible and add a lot of paperwork while not actually being useful, and sadly a lot of EU regulation has ended up this way. So it is worrying, it could end up very badly.
.
NIS 2 also has some specific things to say about WHOIS, and I know this is a very controversial subject. Some people say it's a huge violation of privacy and others say everyone should be able to see it. The EU has struck some middle ground; they said the WHOIS data has to be correct and registries have to have plans in place to actually verify that they know who owns a domain name, and they also said that if the domain holder is not a natural person but a company, then you should just make it public who owns the domain name, and the rest, of course, should not be public. But the EU also says that if a legitimate access seeker shows up at the registry, they must pretty quickly get the data they need that is associated with a domain name.
.
I have tried to figure out who a legitimate access seeker is and they have cleverly not defined it. So this probably means police and law enforcement, but it might even include universities, researchers and security companies.
.
If you look at the NIS, there are other things for ccTLDs in there, and I do recommend that you give it a good look.
.
Now, here the controversial part. The European Commission was feeling good about itself, the GDPR is nice and they said we are also going to regulate the root servers of the Internet because they are super important, and of course they are right. The root is super important for the Internet, that's why many of you take so good care of it.
.
But it sucks to have the EU regulate root servers, because the thing is, it is pretty strange that the European Union wants to do an audit on the US Department of Defence, for example. Because, yeah, they run root servers and, in theory, the EU could show up and ask them how well their security plans are doing and whether it can do some random audits on the US Department of Defence.
.
It's not nice for people like ISC, for example, that want to operate root servers but do not want to comply with EU regulations, especially since, if the EU decides to regulate the root servers, many other countries might also decide they want to regulate the root, and this would lead to a huge pile of regulation. That's why we invented ICANN, so as not to have every government trying to impose its own will on the Internet.
.
Now, the good news is that the good people of the RIPE NCC have managed to convey to the European Parliament that it would suck if the EU would try to regulate the root servers, and the European Parliament has included an amendment in its report that will actually remove the root servers from the NIS directive. So, that's a very good start, but it still has to happen. And the key reason why the root servers are not essential is that the root is essential, but there are so many root servers that you can actually shut down most of them and no one would notice. Every one of the root server operators provides exactly the same service as the other root servers, so they are not essential.
.
Now, at this point you might feel like, yeah, I am angry and I want Brussels to stay out of my servers. Well, I can agree. Brussels is going to come for your servers, and we need to make the best of it.
.
So we need to engage with governments and need to engage with Brussels. There are two things that you really should not be doing.
.
I have heard many people in the scene say, look, we are doing so well here on the Internet, we don't need any regulation. And the problem is that is, of course, true for you and for all my friends and they are all great. It is not true for the IT sector in general or for the Internet in general, it leaks like a sieve, it is not good. So it's not credible for us to say we don't need any regulation, even though we and our friends are doing a great job.
.
And the other thing is that people have said, look, I may run DNS for a million domain names and for 25 countries around the world but we're not essential. That's not important what we do, and that is not credible. We should take ourselves seriously. The work we do is important and if we mess it up there are serious consequences.
.
What we do need to make sure is that the definitions of who is an essential service provider are good enough that you don't become an essential service provider just because you run, let's say, an NTP server. Unless you run 1,000 NTP servers and you are telling people they all should be using your server, then you are important.
.
So we need to make sure that the definitions are good.
.
We also need to make sure that the implementation of the general principles makes sense, because the NIS directive says you must have plans for business continuity and risk analysis. Well, I can do a risk analysis and it could fit on a Post‑it note or it could fit on 50 pages, and I'm not sure how big an analysis is big enough. So we should make sure that the kind of risk analysis that we all should be doing is actually good enough for the EU, and not that they say, look, we need like 500 kilos of paperwork and otherwise we don't believe it.
.
How do we get these people to make these correct choices? We can follow the RIPE NCC's lead and engage with Brussels, but also, very important, every country has a telecommunications department or ministry, they also care about this, they have a very big voice in Brussels, so if you are from an EU country and you have contacts with your government, please make sure you are in the loop because they will be relying on you to hear what is reasonable, because if we don't show up, they will only hear people from big accountancy and consultancy firms talking to them and they will say that no business continuity plan is complete unless it has 500 pages and that's not what we need.
.
So if you have any influence, please make yourself available to these governments. You will be surprised to learn that they actually care about your input, because they also don't want to pass ridiculous legislation.
.
Rounding up, I hope I have made it clear that the NIS directive is an important thing, that it could end up being terrible, but that it could also end up being useful. If we want to make it useful, the way to get there is not to tell people in Brussels that they are all stupid and that we don't need to be regulated because the Internet is perfect already; we are not going to be able to credibly claim that. But we can have a role in explaining who is essential, who is not essential, and what reasonable things can be done to enhance our security.
.
Thank you.

SHANE KERR: All right. Well, I don't know if that was inspirational or not, but we are way, way over time, so thank you everyone for hanging in there and listening to Bert's talk. Please feel free to go ahead and chat about it, and talk to people about it. And we can also discuss it on the DNS Working Group mailing list.
.
So thank you everyone again and we will see you on the Internet. Bye.
.
(Coffee break)