default refresh periods for dynamic dns

i wrote this article on dns aging/scavenging simplified a while back.  one of my coworkers recently asked me what the default refresh period was.  wow, i had totally forgotten it since writing that post, and since i never put it in the original article, finding it again took more time on google than i wanted to spend.  that means – blog it.  so here it is… the default refresh periods.

you can find this information in this article: http://technet.microsoft.com/en-us/library/cc757041.aspx.

net logon: 24 hours

clustering: 24 hours

dhcp client: 24 hours
The DHCP Client service sends dynamic updates for the DNS records. This includes both computers that obtain a leased Internet Protocol (IP) address by using Dynamic Host Configuration Protocol (DHCP) and computers that are configured statically for TCP/IP.

dhcp server: four days (half of the lease interval, which is eight days by default)
Refresh attempts are made only by DHCP servers that are configured to perform DNS dynamic updates on behalf of their clients, for example, Windows 2000 Server DHCP servers and Windows Server 2003 DHCP servers. The period is based on the frequency with which DHCP clients renew their IP address leases with the server. Typically, this occurs when 50 percent of the scope lease time has elapsed. If the default DHCP scope lease duration of eight days is used, the maximum refresh period for records that are updated by DHCP servers on behalf of clients is four days.
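if it helps to see the arithmetic behind the dhcp server row: the refresh period is just half of the scope lease time, since that's when clients typically renew. here's a minimal sketch (the function name is mine, not anything from the article):

```python
from datetime import timedelta

def dhcp_server_refresh_period(lease: timedelta) -> timedelta:
    """DHCP servers refresh DNS records on behalf of their clients when
    the client renews its lease, which typically happens once 50 percent
    of the scope lease time has elapsed."""
    return lease / 2

# with the default 8-day scope lease, the refresh period works out to 4 days
print(dhcp_server_refresh_period(timedelta(days=8)))  # 4 days, 0:00:00
```

so if you shorten your scope lease duration, the effective dns refresh period driven by the dhcp server shrinks right along with it.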

 

and while i’m at it, if you want to change some of these defaults, you can do that through group policy as of windows 2003.  i guess that should be pretty old news by now.  here’s the link for that article: http://support.microsoft.com/default.aspx/kb/294785.
