I inherited FortiNAC in a broken state. The engineer who set it up was no longer with the company, and from what I gathered, the chaos of that original deployment was a significant reason for their departure. What they left behind was a NAC system that technically functioned but created constant friction: inconsistent enforcement, poor compatibility with Junos's commit-based configuration model, and a wireless problem nobody could pin down.

That last one took the longest to crack.

For a long time we had intermittent wireless issues across the environment. Devices dropping, connectivity behaving strangely, nothing we could reliably reproduce. Then one day I caught it in the act and pulled a packet capture. The FortiNAC agent on the endpoint was causing the wireless adapter to spam DHCP requests for no apparent reason. The agent was flooding the client’s own connection. It had been doing this, intermittently, for years.

That packet capture was the end of FortiNAC for us.


Why Juniper NAC, and why then

We had already been in conversations with our Juniper account rep about Mist NAC. We were a Juniper shop; the entire access layer was already managed through Juniper Mist switching, so the integration story made sense. The timing aligned. The FortiNAC situation had crossed a line.

What I knew going in: Juniper Mist NAC was still in active development. The GUI wasn’t complete. There were capabilities we needed that weren’t in the product yet. I knew that before we started. The plan from day one was to work directly with Juniper to close the gaps, not to pretend they didn’t exist.

Our account rep arranged an NDA so I could work directly with the Juniper Mist NAC engineering team. Over a few weeks I met with them for about eight hours total. I walked them through what we needed from a production NAC, explained our operational requirements, and worked through the gaps one by one. Some features got added to the product. Others weren’t going to make it into the GUI in time, but the engineers could expose them through the API, which is all I needed.


The PowerShell module

The API documentation at the time was clearly written for internal use. Not hostile, just not built for an outside engineer trying to write production tooling against it. A lot of the work was trial and error: figuring out which endpoints returned what, how responses were structured, what combinations of calls gave me the cleanest data.
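Most of that trial and error looked like this: fire a raw request at an endpoint, dump whatever came back, and map the structure by hand. A minimal sketch, assuming a Mist-style token header; the endpoint path here is illustrative, since the real paths were exactly what had to be worked out:

```powershell
# Exploratory call of the kind the module grew out of.
# The "Authorization: Token" header follows Mist's API convention;
# treat the endpoint path as an illustrative assumption.
$Token   = $env:MIST_API_TOKEN
$OrgId   = $env:MIST_ORG_ID
$Headers = @{ Authorization = "Token $Token" }

$Response = Invoke-RestMethod -Method Get `
    -Uri "https://api.mist.com/api/v1/orgs/$OrgId/nactags" `
    -Headers $Headers

# Dump the raw structure to see what the endpoint actually returns
$Response | ConvertTo-Json -Depth 5
```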

The module I built let my team interact with Juniper NAC without touching the GUI. Device labeling was the foundation; tag an endpoint with the right NAC label and policy assignment follows automatically. Client list management let us update and bulk-import device records. Then there was the MAC-to-location lookup, which ended up being the command everyone reached for first. Give it a MAC address, get back the switch port or wireless AP the device was last seen on. Across 300 access layer switches and all our wireless infrastructure. One command.
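The shape of that lookup is simple: normalize the MAC, search wired clients first (switch and port), fall back to wireless (AP and SSID). A hypothetical sketch — the function name, endpoint paths, and response fields are illustrative, not the module's actual surface:

```powershell
# Hypothetical sketch of the MAC-to-location command.
function Get-NacClientLocation {
    param(
        [Parameter(Mandatory)] [string]$Mac,
        [string]$BaseUri = 'https://api.mist.com/api/v1',
        [string]$OrgId   = $env:MIST_ORG_ID
    )
    $Headers = @{ Authorization = "Token $($env:MIST_API_TOKEN)" }
    $Mac = ($Mac -replace '[:\.-]', '').ToLower()   # normalize to bare hex

    # Wired first: returns the switch and port the device was last seen on
    $Wired = Invoke-RestMethod -Headers $Headers `
        -Uri "$BaseUri/orgs/$OrgId/wired_clients/search?mac=$Mac"
    if ($Wired.results) {
        return $Wired.results | Select-Object mac, site_id, device_mac, port_id
    }

    # Fall back to wireless: last AP and SSID
    $Wireless = Invoke-RestMethod -Headers $Headers `
        -Uri "$BaseUri/orgs/$OrgId/clients/search?mac=$Mac"
    $Wireless.results | Select-Object mac, site_id, ap, ssid
}

# Usage: Get-NacClientLocation -Mac 'aa:bb:cc:dd:ee:ff'
```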

The bulk operations were what made the migration possible at the speed we needed. FortiNAC had years of records. Migrating them one at a time through a GUI would have taken days. The module let us pull the FortiNAC dataset, reformat it, and push everything into Juniper NAC at once.
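The pipeline was export, reshape, batch-post. A sketch of that path, assuming a CSV export from FortiNAC; the column names and the import endpoint are assumptions for illustration:

```powershell
# Sketch: FortiNAC CSV export -> reshape -> one POST per batch.
# Column names and the import endpoint are illustrative assumptions.
$Headers = @{ Authorization = "Token $($env:MIST_API_TOKEN)" }
$OrgId   = $env:MIST_ORG_ID

$Records = Import-Csv .\fortinac-export.csv | ForEach-Object {
    @{
        mac    = ($_.MacAddress -replace '[:\.-]', '').ToLower()
        name   = $_.HostName
        labels = @($_.DeviceClass)   # maps to the NAC label that drives policy
    }
}

# Push in batches rather than one record per call
$BatchSize = 500
for ($i = 0; $i -lt $Records.Count; $i += $BatchSize) {
    $End   = [Math]::Min($i + $BatchSize, $Records.Count) - 1
    $Batch = @($Records[$i..$End])
    Invoke-RestMethod -Method Post -Headers $Headers `
        -Uri "https://api.mist.com/api/v1/orgs/$OrgId/usermacs/import" `
        -Body (ConvertTo-Json $Batch -Depth 3) `
        -ContentType 'application/json'
}
```

Batching is what turned days of GUI work into a single run: the expensive part is the per-call overhead, not the record count.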


The migration

The environment was about 300 access layer switches across all Crane sites. Because we were already managing the access layer through Juniper Mist, the migration had a real advantage: moving to Juniper NAC was mostly a matter of updating switch templates, not rebuilding configurations from scratch.

What couldn’t be staged was the 802.1X authentication config on the switches themselves. FortiNAC was on-prem; Juniper NAC is cloud-delivered. A switch config can only point at one RADIUS source at a time. There was no gradual rollout. It was always going to be a hard cutover.

So 90% of the work was preparation. We ran both systems side by side for data purposes: importing all records into Juniper NAC, validating labels, testing policy behavior in the lab, confirming every device class we cared about was accounted for. By the time I got final sign-off to go to production, I had about 80% of the migration plan complete and validated. The cutover itself took about a week.

A handful of endpoints came up with incorrect labels and got corrected once we found them. No user impact. No sites that had to go back to FortiNAC.


The CMMC piece

NAC isn’t an optimization for us; it’s a compliance requirement. Crane is a defense contractor operating under CMMC and NIST requirements. The NAC is the physical enforcement layer: only authorized, authenticated devices get onto the network. It was a core part of the controls that put us at a near-perfect score with our CMMC auditors.

That’s part of why the migration had to be as clean as it was. There was no acceptable degraded-state period. Inheriting a broken NAC in that environment and leaving it broken wasn’t something I could sit on.


Where it stands

The Juniper Mist NAC GUI still doesn’t have everything we need. The PowerShell module is still in active daily use by the whole team. I’ve updated it several times since the original build, mostly to track API changes on Juniper’s side, occasionally because someone on the team thought of something useful to add. At this point it’s just how we work with NAC.

If I could change one thing it would be the timing. I wish we had moved off FortiNAC sooner. That product caused real pain for years, and once we were on Juniper NAC the difference was obvious immediately. No agent issues, no compatibility friction with our switching infrastructure, and when something needed fixing I had a direct line to the engineers building the product.

Vendor roadmaps are statements of intent. Whether you can actually get what you need depends on what you can build with what’s available, and whether you can work with the people on the other side.