  • My recommendation is to start with getting fax to work locally. As in, from port 1 of a single SPA2102 to port 2 of the same. This would validate that your fax machines and the SPA2102 are operational, and it’s entertaining in its own right to have a dialtone that “calls” the other port.

    Fortunately, Gravis from the Cathode Ray Dude YouTube channel has a writeup to do exactly that, and I’ve personally followed these steps on an SPA122 with success, although I was doing a silly phone project, not a fax project. https://gekk.info/articles/ata-config.html

    If you’re lucky, perhaps fax will Just Work because your machines are very permissive with the signals they receive and can negotiate. If not, you might have to adjust the “fax optimizations” discussed here: https://gekk.info/articles/ata-dialup.html

    Then, once local faxing works, you can try connecting two VoIP devices together over the network. This can be as simple as direct SIP using an IP address and port number, or can involve setting up a PBX that both devices register against.
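
    To check whether two devices can even see each other for direct SIP, here’s a minimal Python sketch that sends a SIP OPTIONS “ping” over UDP and prints whatever comes back. The address is a made-up example, and some ATAs ignore unauthenticated OPTIONS, so treat silence as inconclusive rather than as proof of a problem.

    ```python
    import socket

    def sip_options_ping(host: str, port: int = 5060, timeout: float = 2.0):
        """Send a bare SIP OPTIONS request and return the first reply line."""
        msg = (
            f"OPTIONS sip:ping@{host}:{port} SIP/2.0\r\n"
            "Via: SIP/2.0/UDP 0.0.0.0:5060;branch=z9hG4bK-ping1\r\n"
            "From: <sip:ping@0.0.0.0>;tag=ping1\r\n"
            f"To: <sip:ping@{host}>\r\n"
            "Call-ID: ping1@0.0.0.0\r\n"
            "CSeq: 1 OPTIONS\r\n"
            "Max-Forwards: 70\r\n"
            "Content-Length: 0\r\n\r\n"
        )
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.settimeout(timeout)
            s.sendto(msg.encode(), (host, port))
            try:
                reply, _ = s.recvfrom(2048)
                return reply.decode(errors="replace").splitlines()[0]
            except socket.timeout:
                return None  # no answer: unreachable, or OPTIONS ignored

    print(sip_options_ping("192.168.1.20"))  # hypothetical ATA; e.g. "SIP/2.0 200 OK"
    ```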



  • Thank you for that detailed description. I see two things of concern: the first is the IPv6 “network unreachable” error. The second is the lost IPv4 connection, as opposed to a rejection.

    So starting in order: does the machine on the external network that you’re running curl on have a working IPv6 stack? As in, if you opened a web browser to https://test-ipv6.com/ , does it pass all or most tests? An immediate “network is unreachable” suggests that external machine doesn’t have IPv6 connectivity, which doesn’t help debug what’s going on with the services.

    Also, you said that all services that aren’t on port 80 or 443 are working when viewed externally, but do you know if that was over IPv4 or IPv6? I use a browser extension called IPvFoo to display which protocol the page loaded with, available for Chrome and Firefox. I would check that your services work as well over IPv6 as they do over IPv4.
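
    If you’d rather test from a terminal, here’s a small Python sketch that attempts the same TCP connection over each address family separately; the hostname is a placeholder for your real one.

    ```python
    import socket

    def try_connect(host: str, port: int, family: int, timeout: float = 5.0) -> str:
        """Attempt a TCP connect over one specific address family."""
        try:
            infos = socket.getaddrinfo(host, port, family, socket.SOCK_STREAM)
        except socket.gaierror as e:
            return f"no address found: {e}"
        fam, stype, proto, _, addr = infos[0]
        s = socket.socket(fam, stype, proto)
        s.settimeout(timeout)
        try:
            s.connect(addr)
            return f"connected to {addr[0]}"
        except OSError as e:
            return f"failed: {e}"  # e.g. network unreachable, refused, timeout
        finally:
            s.close()

    for family, label in ((socket.AF_INET, "IPv4"), (socket.AF_INET6, "IPv6")):
        print(label, try_connect("jellyfin.example.duckdns.org", 443, family))
    ```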

    Now for the second issue. Since you said all services except those on ports 80 and 443 are reachable externally, that would mean the IP address – v4 or v6, whichever one worked – is reachable, but ports 80 and 443 specifically are not.

    On a local network, the norm (for properly administered networks) is for OS firewalls to REJECT unwanted traffic – I’m using all-caps simply because that’s what I learned from Linux iptables. A REJECT means that the packet is discarded by the firewall, and an ICMP notification is then sent back to the original sender, indicating that the firewall didn’t want it and the sender can stop waiting for a reply.

    For WANs, though, the norm is for an external-facing firewall to DROP unwanted traffic. The distinction is that DROPping is silent, whereas REJECT sends the notification. For port forwarding to work, both the firewall on your router and the firewall on your server must permit ports 80 and 443 through. It is a very rare network that blocks outbound ICMP messages from a LAN device to the Internet.
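
    You can see the difference from the client side: a REJECT fails almost instantly with “connection refused”, while a DROP just hangs until the timeout. Here’s a small sketch of that classification, assuming IPv4 and a plain TCP service:

    ```python
    import socket

    def probe(host: str, port: int, timeout: float = 5.0) -> str:
        """Classify how a port responds: open, REJECTed, or silently DROPped."""
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(timeout)
        try:
            s.connect((host, port))
            return "open: something accepted the connection"
        except TimeoutError:
            return "silent DROP (or host offline): no reply within timeout"
        except ConnectionRefusedError:
            return "REJECT or closed port: a TCP RST / ICMP unreachable came back"
        except OSError as e:
            return f"other failure: {e}"
        finally:
            s.close()

    print(probe("203.0.113.1", 443))  # documentation-range IP; substitute your own
    ```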

    With all that said, I’m led to believe that your router’s firewall is not honoring your port-forward setting. Because if it were, and your server’s firewall discarded the packet, that would probably have been a REJECT, not a silent drop. But curl showed your connection timed out, which usually means no notification was received.

    This is merely circumstantial, since there are some OSes that will DROP even on the LAN, based on misguided threat modeling. But you will want to focus on the router’s firewall, as one thing routers often do is intercept ports 80 and 443 for the router’s own web UI. Thus, you have to make sure there aren’t such hidden rules that preempt the port-forwarding table.


  • I’m still trying to understand exactly what you do have working. You have other services exposed by port numbers, and they’re accessible in the form <user>.duckdns.org:<port> with no problems there. And then you have Jellyfin, which you’re able to access at home using https://jellyfin.<user>.duckdns.org without problems.

    But the moment you try accessing that same URL from an external network, it doesn’t work. Even if you use HTTP with no S, it still doesn’t connect. Do I understand that correctly?



  • My last post didn’t substantially address smaller ISPs, and from your description, it does sound like your ISP might be a smaller operator. Essentially, on the backend, a smaller ISP won’t have the customer base to balance their traffic in both directions. But they still need to provision for peak traffic demand, and as you observed, that could mean leaving capacity on the table, err, fibre. This is correct from a technical perspective.

    But now we touch on the business side of things again. The hypothetical small ISP – which I’ll call the Retail ISP, since they are the face that works with end-user residential customers – will usually contract with one or more regional ISPs in the area for IP transit. That is, upstream connectivity to the broader Internet.

    It would indeed be wasteful and expensive to obtain an upstream connection that guarantees 40 Gbps symmetric at all times. So they don’t. Instead, the Retail ISP would pursue a burstable billing contract, where they commit to specific, continual, averaged traffic rates in each direction, but have some flexibility to use more or less than that committed value.
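
    One common form of burstable billing is 95th-percentile billing: the provider samples the traffic rate every few minutes, throws away the top 5% of samples, and bills on the highest remaining value. A sketch of that arithmetic, with entirely made-up numbers:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    # One month of 5-minute samples of the Retail ISP's upstream usage, in Gbps.
    samples = rng.gamma(shape=4.0, scale=5.0, size=30 * 24 * 12)

    commit = 20.0  # committed rate in the contract, Gbps (hypothetical)
    p95 = float(np.percentile(samples, 95))  # billable rate under 95th-percentile billing
    verdict = "within commit" if p95 <= commit else "overage fees apply"
    print(f"95th percentile: {p95:.1f} Gbps -> {verdict}")
    ```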

    So even if the Retail ISP is guaranteeing each end-user at least 40 Gbps download, the Retail ISP must write up a deal with the Upstream ISP based on averages. And with, say, 1000 customers, the law of averages will hold true. So let’s say the average rates are actually 20 Gbps down/1 Gbps up.

    To be statistically rigorous, though, I should mention that traffic estimation is a science, with applicability to everything from data network and road traffic planning, to queuing for the bar at a music venue, to managing electric grid stability. Looking at historical data to determine a weighted average would be somewhat straightforward, but compensating for variables so that it becomes future-predictive is the stuff of statisticians with post-nominal degrees.

    What I can say, though, from what I remember in calculus at uni, is that if each end-user’s traffic rate is independent of the other end-users’ (a proposition that is usually true, but not necessarily at all times of day), then the Central Limit Theorem says that the distribution of the aggregate traffic across all end-users will approximate a normal distribution (aka Gaussian, or bell curve), getting closer as the number of users grows. This was a staggering result when I first learned it, because it really doesn’t matter what each user is doing; it all becomes a bell curve in the end.
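
    You can see this with a quick simulation. Below, each user’s instantaneous rate is drawn from a heavy-tailed distribution that looks nothing like a bell curve, yet the aggregate does; the distribution and numbers are arbitrary assumptions:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_users, n_samples = 1000, 10_000

    # Each user's instantaneous rate in Mbps: bursty and heavy-tailed
    # (Pareto), wildly skewed on its own.
    per_user = rng.pareto(3.0, size=(n_samples, n_users)) * 20.0

    aggregate_gbps = per_user.sum(axis=1) / 1000.0  # total at the upstream port
    print(f"mean {aggregate_gbps.mean():.1f} Gbps, std {aggregate_gbps.std():.2f} Gbps")
    # A histogram of aggregate_gbps is close to Gaussian even though no
    # individual user's traffic is -- the Central Limit Theorem at work.
    ```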

    The Retail ISP’s contract with the Upstream ISP probably has two parts: a circuit, and transit. The circuit is the physical line; for the given traffic, a 50 Gbps fibre connection might be provisioned to allow lots of burstable bandwidth. If the Retail ISP is somewhat remote, perhaps a microwave RF link could be set up, or leased from a third party. But we’ll stick with fibre, as that’s going to be symmetrical.

    As a brief aside, even though a 40 Gbps circuit would also be sufficient, sometimes the Upstream ISP’s nearby equipment doesn’t support certain speeds. If the circuit is Ethernet-based, then a 40 Gbps QSFP+ circuit is internally four 10 Gbps links bundled together on the same fibre line. But supposing the Upstream ISP normally sells 200 Gbps circuits, then 50 Gbps to the Retail ISP makes more sense, as a 200 Gbps QSFP56 circuit is internally made from four 50 Gbps lanes, which can oftentimes be broken out. The Upstream and Retail ISPs need to agree on the technical specs for the circuit, but it certainly must provide overhead beyond the averages agreed upon.

    And those averages are captured in the transit contract, where brief exceedances/underages are not penalized but prolonged conditions would be subject to fees or even result in new contract negotiations. The “waste” of circuit capacity (especially upload) is something both the Retail ISP (who saves money, since guaranteed 50 Gbps would cost much more) and the Upstream ISP willingly accept.

    Why? Because the Upstream ISP is also trying to balance the traffic to their upstream, to avoid fees for imbalance. So even though the Retail ISP can’t guarantee symmetric traffic to the Upstream ISP, what the Retail ISP can offer is predictability.

    If the Upstream ISP can group the Retail ISP’s traffic with a nearby data center, then that could roughly balance out, and allow them to pursue better terms with the subsequent higher tier of upstream provider.

    Now we can finally circle back to why the Retail ISP would decline to offer end-users faster upload speeds. Simply put, the Retail ISP may be aware that even if they offered higher upload, most residential customers wouldn’t really take advantage of it, even as a free upgrade. This is the reality of residential Internet traffic. Indeed, the few ISPs in the USA offering residential 10 Gbps connections have to be thoroughly aware that even the most dedicated of, err, Linux ISO aficionados cannot saturate that connection for more than a few hours per month. (For scale: 10 Gbps is roughly 4.5 TB per hour, so even tens of terabytes a month amounts to only a handful of fully saturated hours.)

    But if most won’t take advantage of it, then it shouldn’t impact the Retail ISP’s burstable contract with the Upstream ISP, and so it’s a free choice, right? Well, yes, but it’s not the only consideration. The thing about offering more upload is that while most customers won’t use it, a small handful will. And maybe those customers are the type that will complain loudly if the faster upload isn’t honored. And that might hurt the Retail ISP’s reputation. So rather than take that gamble by guaranteeing faster upload for residential connections, they’d prefer to just make it “best effort”, whatever that means.

    EDIT: The description above sounds a bit defeatist for people who just want faster upload, since it seems that ISPs just want to do the bare minimum and not cater to users who are self-hosting, whom ISPs believe to be a minority. So I wanted to briefly – and I’m aware that I’m long-winded – describe what it would take to change that assumption.

    Essentially, existing “average joe” users would have to start uploading a lot more than they do now. With so-called cloud services, it might seem that upload should go up, if everyone’s photos are stored on remote servers. But cloud services also power major sites like Netflix, which are larger download sources. So net-net, I would guess that the residential customer’s download-to-upload ratio is widening, not shrinking.

    It would take a monumental change in networking, computing, or consumer demand to reverse this tide. Example: a world where data sovereignty – bona fide ownership of your own data – is so paramount that everyone and their mother has a social-media server at home that mutually relays and amplifies viral content. That is to say, self-hosting and upload amplification.


  • Historically, last-mile technologies like dial-up, DSL, satellite, and DOCSIS/cable had limitations on their uplink power. That is, the amount of energy they could use to send upstream data through the medium.

    Dial-up and DSL had to comply with rules on telephone equipment, which I believe limited end-user equipment to less power than what the phone company can put onto the wires, premised on the phone company being better positioned to identify and manage interference between different phone lines. Generally, reduced power means a reduced signal-to-noise ratio, which means less theoretical and practical bandwidth available in the upstream direction.

    Cable has a similar restriction, because cable plants could not permit end-user “back feeding” of the cable system. To make cable modems work, some amount of power must be allowed to travel upstream, but too much would potentially cause interference to other customers. Hence, regulatory restrictions on upstream power. This also matched actual customer usage patterns at the time.

    Satellite is more straightforward: satellite dishes on earth are kinda tiny compared to the bus-sized satellite’s antennae. So sending RF up to space is just harder than receiving it.

    Fibre, by contrast, has a huge amount of bandwidth, to the point that when new PON standards are written, they don’t even bother reusing the old standard’s allocated wavelengths, but define new ones. (From memory: GPON runs 1490 nm down/1310 nm up, while XGS-PON moved to 1577 nm down/1270 nm up.) That way, both old and new services can operate on the same fibre during the switchover period. So fibre by default allocates symmetrical bandwidth, although some PON systems might still be closer to cable’s asymmetry.

    But there’s also the backend side of things: if a major ISP only served residential customers, who predominantly have asymmetric traffic patterns, then they will likely have to pay money to peer with other ISPs, because of the disparity. Major ISPs solve this by offering services to data centers, which generally are asymmetric but tilted towards upload. By balancing residential with server customers, the ISP can obtain cheaper or even free peering with other ISPs, because symmetrical traffic would benefit both and improve the network.


  • Looking at the diagram, I don’t see any issue with the network topology. And the power arrangement also shouldn’t be a problem, unless you require the camera/DVR setup to persist during a power cut.

    In that scenario, you would have to provide UPS power to all of: the PoE switch, the L3 switch, and the NVR. But if you don’t have such a requirement, then I don’t see a problem here.

    Also, I hope you’re doing well now.




  • Agreed. When I was fresh out of university, my first job had me debugging embedded firmware for a device which had both a PowerPC processor and an ARM coprocessor. I remember many evenings staring at disassembled instructions in objdump, as well as getting good at endian conversions. The PPC processor was big-endian and the ARM was little-endian, which is typical for those processor families. We did briefly consider synthesizing one of them to match the other’s endianness, but this was deemed to be even more confusing haha
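
    For anyone who hasn’t had the pleasure, here’s the kind of conversion in question, sketched in Python with a made-up register value:

    ```python
    import struct

    raw = bytes.fromhex("0000BEEF")  # hypothetical 32-bit value from a memory dump

    big = struct.unpack(">I", raw)[0]     # as the big-endian PowerPC reads it
    little = struct.unpack("<I", raw)[0]  # as the little-endian ARM reads it
    print(hex(big), hex(little))          # 0xbeef 0xefbe0000

    # Byte-swapping moves a value across the endianness boundary:
    swapped = big.to_bytes(4, "little")
    assert struct.unpack(">I", swapped)[0] == little
    ```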



  • Your primary issue is going to be the power draw. If your electricity supplier has cheap rates, or if you have an abundance of solar power, then it could maybe find life as some sort of traffic analyzer or honeypot.

    But I think even finding a PCI NIC nowadays will be rather difficult. And that CPU probably doesn’t have any sort of virtualization extensions to make it competitive against, say, a Raspberry Pi 5.


  • To lay some foundation, a VLAN is akin to a separate network with separate Ethernet cables. That provides isolation between machines on different VLANs, but it also means each VLAN must be provisioned with routing, so as to reach destinations outside the VLAN.

    Router firmware like OpenWrt often treats VLANs as if they were distinct NICs, so you can specify routing rules such that traffic to/from a VLAN can only be routed to the WAN and nowhere else.

    At a minimum, for an isolated VLAN that requires internet access, you would have to:

    • define an IP subnet for your VLAN (e.g. a /24 for IPv4 and a /64 for IPv6)
    • advertise that subnet (DHCP for IPv4 and SLAAC for IPv6)
    • route the subnets to your WAN (NAT for IPv4; ideally no NAT66 for IPv6)
    • and finally enable firewalling

    As a reminder, NAT and NAT66 are not firewalls.
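
    To make the first two bullets concrete, here’s a small Python sketch of the addressing; the subnets are documentation-range examples, not a recommendation:

    ```python
    import ipaddress

    # Hypothetical isolated VLAN (say, for IoT devices)
    vlan4 = ipaddress.ip_network("192.168.20.0/24")
    vlan6 = ipaddress.ip_network("2001:db8:20::/64")

    # A DHCP pool for the IPv4 side, e.g. .100 through .199
    hosts = list(vlan4.hosts())
    dhcp_pool = hosts[99:199]
    print(f"IPv4 gateway {hosts[0]}, DHCP range {dhcp_pool[0]}-{dhcp_pool[-1]}")

    # SLAAC requires advertising exactly a /64 prefix
    assert vlan6.prefixlen == 64
    print(f"IPv6 prefix to advertise: {vlan6}")
    ```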



  • Re: 2.5 Gbps PCIe card

    In some ways, I kinda despise the 802.3bz specification for 2.5 and 5 Gbps on twisted pair. It came into existence after 10 Gbps twisted-pair was standardized, and IMO exists only as a reaction to the stubbornly high price of 10 Gbps ports and the lack of adoption – 1000 Mbps has been a mainstay and is often more than sufficient.

    802.3bz is only defined for twisted pair and not fibre. So there aren’t too many xcvrs (transceivers) that support it, and even fewer SFP+ ports will accept such xcvrs. As a result, the cheap route of buying an SFP+ card and a compatible xcvr is essentially off the table.

    The only 802.3bz-compatible PCIe card I’ve ever personally used is an Aquantia AQN-107 that I bought on sale in 2017. It has excellent support in Linux, and did 10 Gbps line rate in my testing.

    That said, I can’t imagine that cards that do only 2.5 Gbps would somehow be less performant. 2.5 Gbps hardware is finding its way into gaming motherboards, so I would think the chips are mature enough that you can just buy any NIC and expect it to work, just like buying a 1000 Mbps NIC.

    BTW, some of these 802.3bz NICs will eschew 10/100 Mbps support, because of the complexity of retaining that backwards compatibility. This is almost inconsequential in 2024, but I thought I’d mention it.


  • I’ve only looked briefly into APC/UPC adapters, although my intention was to do the opposite of your scenario. In my case, I already had LC/UPC-terminated duplex fibre through the house, and I wanted to use it to move my ISP’s ONT closer to my networking closet. That requires me to convert the ISP’s SC/APC to LC/UPC at the current terminus, then convert it back in my wiring closet. I haven’t gotten past the planning stage for that move, though.

    Although your ISP was kind enough to run this fibre for you, the price of 30 meters of LC/UPC-terminated fibre isn’t terribly excessive (at least here in the USA), so would it be possible to use their fibre as a pull string to run new fibre instead? That would avoid all the adapters, although you’d have to be handy and careful with the pull forces allowed on a fibre.

    But I digress. On the xcvr choice, I don’t have any recommendations, as I’m on mobile. But one avenue is to look at a reputable switch manufacturer and find their xcvr list. The big manufacturers (Cisco, HPE/Aruba, etc) will have detailed spec sheets, so you can find the branded one that works for you. And then you can cross-reference that to cheaper, generic, compatible xcvrs.


  • In my first draft of an answer, I thought about mentioning GPON but then forgot. But now that you mention it, can you describe whether the fibres they installed are terminated individually, or are paired up?

    GPON uses just a single fibre for an entire neighborhood, whereas connectivity between servers uses two fibres, which are paired together as a single cable. The exception is for “bidirectional” xcvrs, which like GPON use just one fibre, but these are more of a stopgap than something voluntarily chosen.

    Fortunately, two separate fibres can be paired together to operate as if they were part of the same cable; this is exactly why the LC and SC connectors come in a duplex (aka side-by-side) format.

    But if the ISP does GPON, they may have terminated your internal fibre run using SC, which is very common in that industry. But there’s a thing with GPON specifically, where the industry has moved to polishing the fibre connector ends at an angle, known as Angled Physical Contact (APC) and marked with green connectors, versus the older Ultra Physical Contact (UPC) that has no angle. The benefit of APC is to reduce reflections and losses in the ISP’s fibre plant, which helps improve service.

    Whereas in data centers and networking, I have never seen anything but UPC, and that’s what xcvrs will expect, with rare exceptions such as GPON xcvrs.

    So I need to correct my previous statement: to be fully functional as designed, the fibre and xcvr must match in all of: wavelength, mode, connector, and the connector’s polish.

    The good news is that this should mostly be moot for your 30 meter run: even with the extra losses from a mismatched polish, the link should still come up.
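
    A back-of-the-envelope link budget shows why; every number below is an illustrative assumption, not something pulled from a datasheet:

    ```python
    # Rough optical link budget for a short single-mode run (all dB figures
    # are illustrative assumptions; check the actual xcvr datasheet).
    tx_power_dbm = -3.0    # assumed transmit power of a 10G LR-style xcvr
    rx_sens_dbm = -14.0    # assumed receiver sensitivity
    fibre_loss_db = 0.4 * (30 / 1000)  # ~0.4 dB/km at 1310 nm, over 30 m
    mismatch_loss_db = 4.0 # guessed penalty for mating APC to UPC

    margin_db = tx_power_dbm - fibre_loss_db - mismatch_loss_db - rx_sens_dbm
    print(f"link margin: {margin_db:.1f} dB")  # positive -> likely links up
    ```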

    As for that xcvr, please note that it’s an LRM, or Long Reach Multimode, xcvr. Would it work at 30 meters? Probably. But an LR xcvr, which is single-mode at 1310 nm, would be ideal.


  • Regarding future-proofing, I would say that anyone laying a single pair of fibres is already constraining themselves when looking to the future. Take 100 Gbps xcvrs as an example: some use just a single pair (2 fibres total) to do 100 Gbps, but others use four pairs (8 fibres total), driving each at just 25 Gbps.

    The latter are invariably cheaper to build, because 25 Gbps lanes have been around for a while now; they’re just shoving four optical paths into one xcvr module. But 100 Gbps on a single fibre pair? That needs something like WDM (several wavelengths down one fibre), which is both expensive and runs into fibre bandwidth limitations, since a single-mode fibre is only single-mode for a given wavelength range.

    So unless the single pair of fibre is the highest class that money can buy, cost and technical considerations may still make multiple multimode fibre cables a justifiable future-looking option. Multiplying fibres in a cable is likely to remain cheaper than advancing the state of laser optics in severely constrained form factors.

    Naturally, a cable with multiple single-mode fibres would be even more future-proofed, but at that point, just install conduit and be forever-proofed.


  • Getting down to brass tacks: the way I’m reading the background info, your ISP ran fibre to your property, and while they were there, you asked them to run an additional, customer-owned fibre segment from your router (where the ISP’s fibre lands) to your server further inside the property. Both the ISP segment and this interior segment are identical single-mode fibres. The interior segment is 30 meters.

    Do I have that right? If so, my advice would be to identify the wavelength of that fibre, which can be found printed on the outer jacket. Do not rely on just the color of the jacket, and do not rely on whatever connector is terminating the fibre. The printed label is the final authority.

    With the fibre’s wavelength, you can then search online for transceivers (xcvrs) that match that wavelength and the connector type. Common connectors in a data center include LC duplex (very common), SC duplex (older), and MPO (newer). 1310 and 1550 nm are common single-mode wavelengths, and 850 and 1300 nm are common multimode wavelengths. But other numbers are used; again, do not rely solely on jacket color. Any connector can terminate any mode of fibre, so you can’t draw any conclusions there.
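
    If it helps to see the matching rules written out, here’s a toy Python sketch. The table deliberately lists only the common values mentioned above, so treat it as a memory aid rather than an exhaustive reference:

    ```python
    # Common wavelengths per fibre mode, per the rules of thumb above.
    COMMON_WAVELENGTHS_NM = {
        "single-mode": {1310, 1550},
        "multimode": {850, 1300},
    }

    def plausible_match(mode: str, wavelength_nm: int,
                        fibre_connector: str, xcvr_connector: str) -> bool:
        """Sanity-check a fibre/xcvr pairing on mode, wavelength, and connector.

        Polish (APC vs UPC) still needs to be checked separately."""
        return (wavelength_nm in COMMON_WAVELENGTHS_NM.get(mode, set())
                and fibre_connector == xcvr_connector)

    print(plausible_match("single-mode", 1310, "LC duplex", "LC duplex"))  # True
    print(plausible_match("multimode", 1310, "LC duplex", "LC duplex"))    # False
    ```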

    For the xcvr to operate reliably and within its design specs, you must match the mode, wavelength, and connector (and its polish). However, in a homelab, you can sometimes still establish a link with mismatched fibres, but YMMV. That practice would be totally unacceptable in a commercial or professional environment.

    Ultimately, it boils down to link losses, which are high if there’s a mismatch. But for really short distances, the xcvrs may still have enough power budget to make it work. Still, this is not using the devices as intended, so you can’t blame them if the link one day stops working. As an aside, some xcvrs prescribe a minimum fibre distance, to prevent blowing up the receiver on the other end. But this really only shows up on extended-distance, single-mode xcvrs, on the order of 40 km or more.

    Finally, multimode is not dead. Sure, many people believe it should be deprecated for greenfield applications. I agree. But I have also purchased multimode fibre for my homelab, precisely because I have an obscene number of SFP+ multimode LC transceivers. The equivalent single-mode xcvrs would cost more than $free, so I just don’t bother. Even better, these older xcvrs that I have are all genuine name-brand, pulled from actual service. Trying to debug fibre issues is a pain, so having a known quantity is a relief, even if it means my fibre is “outdated” but serviceable.