We’ve been working with the two Atom D510 servers we intend to use as dedicated route reflectors and came across a small problem we thought prudent to share. (Original post: Atom D510 Servers as Route Reflectors) It’s not a showstopper; rather, it should be classified as “annoying”.
The Atom servers come with on-board IPMI 2.0 capabilities. It’s great for the price – remote KVM, virtual media, sensors, and more. However, we’ve discovered that the sensors don’t coexist well with the operating system (in our case, OpenBSD) polling them as well. Eventually the sensor values go out of bounds high or low, disappear, or return incorrect results, and a power cycle is required to reset them. So we ended up using config(8) to disable the iic* and wbsio* devices to prevent the kernel from accessing them.
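For reference, a sketch of how such a change can be applied with config(8) and OpenBSD’s in-kernel config editor (UKC); the device names here match our D510 dmesg, so adjust for your own hardware:

```
# Edit a copy of the kernel with the UKC (run as root).
config -e -o /bsd.new /bsd
# At the UKC> prompt:
#   disable iic
#   disable wbsio
#   quit
# Back up the running kernel and install the modified one:
cp /bsd /bsd.orig && cp /bsd.new /bsd
# The change takes effect on the next reboot.
```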
Instead, if local sensor monitoring is needed, using ipmitool to talk to the on-board IPMI IP address is an acceptable workaround, although not as clean. We suspect there is a contention issue between the on-board IPMI controller and the OS both accessing the sensors that eventually leads to the behavior we observed.
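If you do want sensor data without the kernel touching the hardware, polling the BMC over its LAN interface works; a quick sketch (the BMC address and credentials below are placeholders, not our actual values):

```
# Query the BMC's sensor data records over the IPMI LAN interface.
# 192.0.2.50 / ADMIN / secret are placeholders for your BMC details.
ipmitool -I lanplus -H 192.0.2.50 -U ADMIN -P secret sdr list
# Or full sensor readings including thresholds:
ipmitool -I lanplus -H 192.0.2.50 -U ADMIN -P secret sensor list
```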
This does not affect the Atom 330 since it doesn’t have onboard IPMI. Furthermore, reading the 330’s sensors through the operating system reports additional data such as fan RPM (with our case fan mod) whereas this data was only presented via IPMI on the D510.
Someone had to be first, and it looks like it’s APNIC. Earlier this year all of the regional registries were given a final /8, and APNIC is the first to reach the end game. Have you thought about your IPv6 plans lately?
Dear APNIC community
We are writing to inform you that as of Friday, 15 April 2011, the APNIC
pool reached the Final /8 IPv4 address block, bringing us to Stage Three
of IPv4 exhaustion in the Asia Pacific. For more information about Stage
Three, please refer to:
Last /8 address policy
IPv4 requests will now be assessed under section 9.10 in “Policies
for IPv4 address space management in the Asia Pacific region”:
APNIC’s objective during Stage Three is to provide IPv4 address space
for new entrants to the market and for those deploying IPv6.
From now, all new and existing APNIC account holders will be entitled
to receive a maximum allocation of a /22 from the Final /8 address
block.
For more details on the eligibility criteria according to the Final /8
policy, please refer to:
Act NOW on IPv6
We encourage Asia Pacific Internet community members to deploy IPv6
within their organizations. You can refer to APNIC for information
regarding IPv6 deployment, statistics, training, and related regional
To apply for IPv6 addresses now, please visit:
APNIC Secretariat firstname.lastname@example.org
Asia Pacific Network Information Centre (APNIC) Tel: +61 7 3858 3100
PO Box 3646 South Brisbane, QLD 4101 Australia Fax: +61 7 3858 3199
6 Cordelia Street, South Brisbane, QLD http://www.apnic.net
* Sent by email to save paper. Print only if necessary.
From time to time we get questions on DNSSEC support. There are many parts to DNSSEC, but here’s where we stand as of this post:
Our Secondary DNS service (which is based on BIND9) has supported DNSSEC for several years, and we have received confirmed reports from customers using the secondary service that it works. The Primary DNS service does not support it at this time, since it’s based on a version of PowerDNS that lacks DNSSEC support. However, the next release of PowerDNS will include it, at which point we can work on integrating it into our control center.
On the network side we do not employ any type of mechanism that tries to be “smart” by manipulating DNS traffic. Further to that, both UDP and TCP are open for DNS traffic. Contrary to popular belief, DNS queries can use TCP for queries other than AXFR – clients fall back to TCP when a UDP response is truncated – so we allow both.
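An easy way to confirm that a given server answers DNS over TCP is to force it with dig; a quick sketch (the server and name below are examples):

```
# +tcp makes dig use TCP from the start, rather than only falling
# back after a truncated UDP response.
dig +tcp @ns1.example.com example.com A
```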
Over the next month or two we will be migrating our network to pure gigabit Ethernet. All servers – even the smallest Atom dedicated server – will be given gigabit uplinks. Because this change involves moving the cross connects on our side, we will be contacting all dedicated server and colocation customers individually to arrange maintenance windows.
We’re also increasing the base RAM included with our dedicated servers to 8GB wherever possible. This excludes both of the Atom options, but not by choice; the Atom chipsets unfortunately only support 2GB (330) or 4GB (D510) maximum.
We’re provisioning a pair of our Atom D510 dedicated servers to be used as OpenBSD BGP route reflectors within our network core. Over the last month we’ve been testing this setup with great success on a spare D510, so we went ahead and ordered a pair to dedicate to the task. In the pictures you can see the inside of the D510 case with a couple of changes from its stock config:
- A 2-port Intel PCIe Gigabit Ethernet adapter. We needed at least three Ethernet ports: one for management and two for diverse links into the network core for redundancy.
- There is no hard drive: the boot device is the blue 4GB USB flash drive.
- A high-speed 15,000 RPM fan at the front air vent for airflow in an otherwise fanless case. Initial testing shows a 10 degree reduction in reported temperature. The fan is quite loud but moves a significant amount of air. We opted for a powerful fan so we wouldn’t need to add baffles. (It might even be too much fan!)
Inside view of the D510 (with add-ons)
As a service provider we believe in the products we sell and use them in our own operations – we “eat our own dogfood”, as the saying goes. These servers running OpenBSD fill the role of dedicated route reflectors (a CPU-heavy role) perfectly, at a fraction of the cost of a name-brand solution. As a customer, you could use this type of setup as a router for a cluster of dedicated servers, or combine it with colocation.
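For anyone curious what the route-reflector role looks like on OpenBSD, here is a minimal bgpd.conf sketch; the AS number, addresses, and descriptions are hypothetical placeholders, not our actual configuration:

```
# /etc/bgpd.conf -- minimal iBGP route-reflector sketch
AS 65001
router-id 192.0.2.1

group "rr-clients" {
        remote-as 65001          # iBGP clients in the same AS
        route-reflector          # reflect routes between these clients
        neighbor 192.0.2.10 {
                descr "client-a"
        }
        neighbor 192.0.2.11 {
                descr "client-b"
        }
}
```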
Rear view of the D510 (with PCIe Ethernet card)
These customizations can be ordered as options on our dedicated server offering. The additional fan is something we’re trying ourselves first, after which we plan to offer it to our existing dedicated server customers at no cost. The fan will also become standard equipment. We’re also testing a slightly lower-RPM fan (10k vs. 15k) in the interest of noise reduction.