Team-scoped keys introduce the ability to restrict your token authentication keys to either the development or the production environment. Topic-specific keys build on that environment isolation by letting you associate each key with a specific Bundle ID, streamlining key management.
For detailed instructions on accessing these features, read our updated documentation on establishing a token-based connection to APNs.
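For orientation, here is a minimal sketch of the client-side token this involves: an ES256 JWT minted from the .p8 key with CryptoKit. The function name, parameters, and key contents are all placeholders, not a prescribed implementation.

```swift
import Foundation
import CryptoKit

// Minimal sketch: signing an APNs provider authentication token (ES256 JWT).
// teamID, keyID, and the PEM contents of the .p8 file are placeholders.
func makeAPNsToken(teamID: String, keyID: String, p8PEM: String) throws -> String {
    func base64URL(_ data: Data) -> String {
        data.base64EncodedString()
            .replacingOccurrences(of: "+", with: "-")
            .replacingOccurrences(of: "/", with: "_")
            .replacingOccurrences(of: "=", with: "")
    }
    let header = #"{"alg":"ES256","kid":"\#(keyID)"}"#
    let issuedAt = Int(Date().timeIntervalSince1970)
    let claims = #"{"iss":"\#(teamID)","iat":\#(issuedAt)}"#
    let signingInput = base64URL(Data(header.utf8)) + "." + base64URL(Data(claims.utf8))
    let key = try P256.Signing.PrivateKey(pemRepresentation: p8PEM)
    let signature = try key.signature(for: Data(signingInput.utf8))
    return signingInput + "." + base64URL(signature.rawRepresentation)
}
```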
Delve into the world of built-in app and system services available to developers. Discuss leveraging these services to enhance your app's functionality and user experience.
In MapKit, MKAnnotation takes a CLLocationCoordinate2D. However, in 3D/Flyover mode, the user marker has a height position on the map.
We are currently plotting points which have altitude, speed, heading, etc, and I have a method for creating a CLLocation with this information. What I'm trying to figure out is if there's a way to pass that information along to the MapKit rendering engine / annotations / AnnotationViews to recognize and show when in 3D mode. Is there any support for that currently?
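For context, a sketch of the kind of annotation in question; MKAnnotation only vends the 2D coordinate, so the altitude carried alongside it is not, to my knowledge, consumed by MapKit's renderer. Names here are illustrative.

```swift
import MapKit

// Illustrative sketch: the annotation can carry the full CLLocation
// (altitude, speed, course), but MKAnnotation itself only exposes the
// 2D coordinate, which is what MapKit renders.
final class TrackPointAnnotation: NSObject, MKAnnotation {
    let location: CLLocation                       // includes altitude
    var coordinate: CLLocationCoordinate2D { location.coordinate }

    init(location: CLLocation) {
        self.location = location
    }
}
```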
We are observing a reproducible issue on some (not all) iPad models equipped with A16, where BLE streaming from multiple peripherals at ≥33–40 Hz results in uneven packet distribution, burst delivery, and application-level lag.
The same application, peripherals, firmware, iOS version, and physical environment do not exhibit this behaviour on A14-based iPads (iPad 10).
Affected Hardware:
• iPad 11 (A16)
• iOS versions: identical across tested devices
• Issue affects some devices of the same model, not all
Internal field data
• ~25 affected
• ~5 unaffected
• Customers actively prefer iPad 10 (A14) due to stability
When two or more BLE peripherals stream data concurrently at frequencies ≥33–40 Hz, affected iPads exhibit:
• Uneven packet arrival timing
• Burst delivery instead of uniform intervals
• Increasing latency over time
• Observable application-level lag
This does not present as simple packet loss. Instead, packets arrive in clusters, breaking real-time assumptions.
At ≤30–33 Hz, the issue does not reproduce.
We tested:
• One affected iPad 11
• One unaffected iPad 11
• Same iOS version
• Same app build
• Same peripherals
• Same firmware
• Same physical location
• Same Wi-Fi state
Only the affected device reproduces the issue.
This rules out:
• App logic
• Peripheral firmware
• iOS version
• Environmental RF noise
• Wi-Fi coexistence configuration
Evidence Available
We can provide:
• Screenshots from a minimal test app showing packet counts
• CSV files of packet timestamps
• Source code for the BLE test app
• Side-by-side comparison logs (affected vs unaffected device)
All evidence is from the same app, built solely to measure packet timing.
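For context, a minimal sketch of the kind of timing instrumentation such a test app can use (illustrative only, not the actual test-app source):

```swift
import CoreBluetooth
import QuartzCore

// Illustrative sketch of packet-timing instrumentation: timestamp every
// characteristic notification on arrival, then export deltas for analysis.
final class PacketTimingLogger: NSObject, CBPeripheralDelegate {
    private var arrivals: [UUID: [CFTimeInterval]] = [:]

    func peripheral(_ peripheral: CBPeripheral,
                    didUpdateValueFor characteristic: CBCharacteristic,
                    error: Error?) {
        guard error == nil else { return }
        // CACurrentMediaTime() is monotonic, so deltas survive clock changes.
        arrivals[peripheral.identifier, default: []].append(CACurrentMediaTime())
    }

    /// CSV rows of (timestamp, interval-since-previous) for one peripheral.
    func csvRows(for peripheralID: UUID) -> [String] {
        let t = arrivals[peripheralID] ?? []
        return zip(t.dropFirst(), t).map { "\($0),\($0 - $1)" }
    }
}
```

Notifications would be enabled via setNotifyValue(true, for:) on each streaming characteristic, with this object set as the peripheral's delegate.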
Additional Technical Notes
• Issue persists after factory reset
• Occurs without third-party BLE libraries (CoreBluetooth only)
• Occurs regardless of foreground/background state
• Not correlated with MTU size
• Appears threshold-based (~33–40 Hz)
• Appears device-specific, not model-wide
Topic:
App & System Services
SubTopic:
Networking
I am creating an AppIntent to be used with Shortcuts, and I would like to return a flexible dictionary of values with nested structures. As far as I understand, a custom AppEntity only uses its displayRepresentation to store a title and subtitle, which are LocalizedStringResource types. Although I can convert my dictionary into a string, I found no way in Shortcuts to retrieve its original structure and inspect individual elements in subsequent actions. Is there a way to do this?
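For illustration, a sketch of one possible approach, assuming the fields can be modeled up front: exposing them as @Property members of an AppEntity, which, as far as I know, is what lets Shortcuts inspect individual values in later actions. All type names here are made up.

```swift
import AppIntents

// Illustrative sketch: fields exposed as @Property become individually
// inspectable in Shortcuts, unlike a dictionary flattened into one string.
struct ResultEntity: AppEntity {
    static var typeDisplayRepresentation = TypeDisplayRepresentation(name: "Result")
    static var defaultQuery = ResultQuery()

    var id: UUID

    @Property(title: "Name")
    var name: String

    @Property(title: "Score")
    var score: Int

    var displayRepresentation: DisplayRepresentation {
        DisplayRepresentation(title: "\(name)", subtitle: "\(score)")
    }

    init(id: UUID, name: String, score: Int) {
        self.id = id
        self.name = name
        self.score = score
    }
}

struct ResultQuery: EntityQuery {
    func entities(for identifiers: [UUID]) async throws -> [ResultEntity] { [] }
}
```

An intent's perform() could then declare some IntentResult & ReturnsValue<ResultEntity> to hand the structured value to the next action; a truly free-form nested dictionary doesn't obviously fit this model, though.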
Thank you in advance
Nick Karanatsios
iOS 26.2 Beta 23C5044b
The phone's battery reports 0% health, and leaving it on multiple chargers (high and low wattage) doesn't change anything, although the icon changes to charging.
I replaced the battery with a high-quality aftermarket one at ~70% charge; same issue.
I'm unable to remove the beta or update to RC2 due to the 20% minimum charge required... not sure what to do.
This is just an FYI in case someone else runs into this problem.
This afternoon (12 Dec 2025), I updated to macOS 26.2 and lost my network.
The System Settings' Wi-Fi light was green and said it was connected, but traceroute showed "No route to host".
I turned Wi-Fi on & off.
I rebooted the Mac.
I rebooted the eero network.
I switched to tethering to my iPhone.
I switched to physical ethernet cable.
Nothing worked.
Then I remembered I had a beta of an app with a network system extension that was distributed through TestFlight.
I deleted the app, and networking came right back.
I had this same problem ~2 years ago. Same story:
app with network system extension + TestFlight + macOS update = lost network.
(My TestFlight build might have expired, but I'm not certain)
I don't know if anyone else has had this problem, but I thought I'd share this in case it helps.
What is the best way to detect whether Wi-Fi is being used for wireless CarPlay or is just a normal network interface?
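One hedged, partial signal: detecting whether a CarPlay audio route is currently active. This does not tell you what the Wi-Fi interface itself is doing, and as far as I know there is no public API that reports the interface's CarPlay role directly.

```swift
import AVFoundation

// Hedged partial signal only: detects an active CarPlay audio route
// (.carAudio), not whether Wi-Fi itself is carrying the CarPlay session.
func isCarPlayAudioRouteActive() -> Bool {
    AVAudioSession.sharedInstance().currentRoute.outputs
        .contains { $0.portType == .carAudio }
}
```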
A user of one of my apps reported that since the update to macOS 26.1 they are no longer able to scan Macintosh HD: the app used to work, but now always reports that Macintosh HD contains a single empty file named .nofollow, or rather the path is resolved to /.nofollow.
Initially I thought this could be related to resolving the file from the saved bookmark data, but even restarting the app and selecting Macintosh HD in an open panel (without resolving the bookmark data) produces the same result.
The user tried another app of mine with the same issue, but said that they were able to scan Macintosh HD in other App Store apps. I never heard of this issue before and my apps have been on the App Store for many years, but it looks like I might be doing something wrong, or the APIs that I use are somehow broken. In all my apps I currently use getattrlistbulk because I need attributes that are not available as URLResourceKey in all supported operating system versions.
What could be the issue? I'm on macOS 26.1 myself and never experienced it.
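For readers unfamiliar with the call, here is a sketch of the getattrlistbulk(2) enumeration pattern, adapted to Swift from the man page example. It requests only each entry's name; a real scanner would request more attributes and handle errno properly. This is illustrative, not the poster's actual code.

```swift
import Darwin

// Illustrative sketch of the getattrlistbulk(2) enumeration loop, adapted
// from the man page example; it requests only each entry's name.
func listNames(atPath path: String) {
    let dirfd = open(path, O_RDONLY)
    guard dirfd >= 0 else { perror("open"); return }
    defer { close(dirfd) }

    var request = attrlist()
    request.bitmapcount = u_short(ATTR_BIT_MAP_COUNT)
    request.commonattr = attrgroup_t(ATTR_CMN_RETURNED_ATTRS) | attrgroup_t(ATTR_CMN_NAME)

    let bufferSize = 64 * 1024
    let buffer = UnsafeMutableRawPointer.allocate(byteCount: bufferSize, alignment: 16)
    defer { buffer.deallocate() }

    while true {
        let count = getattrlistbulk(dirfd, &request, buffer, bufferSize, 0)
        if count <= 0 { break }          // 0 = done; -1 = error (check errno)
        var entry = buffer
        for _ in 0..<count {
            // Each entry starts with its total length, then the returned-attrs set.
            let length = entry.load(as: UInt32.self)
            var field = entry.advanced(by: MemoryLayout<UInt32>.size)
            let returned = field.load(as: attribute_set_t.self)
            field = field.advanced(by: MemoryLayout<attribute_set_t>.size)
            if returned.commonattr & attrgroup_t(ATTR_CMN_NAME) != 0 {
                // ATTR_CMN_NAME arrives as an attrreference_t into variable storage.
                let nameRef = field.load(as: attrreference_t.self)
                let namePtr = field.advanced(by: Int(nameRef.attr_dataoffset))
                    .assumingMemoryBound(to: CChar.self)
                print(String(cString: namePtr))
            }
            entry = entry.advanced(by: Int(length))
        }
    }
}
```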
I'm implementing the PushToTalk framework and have encountered an issue where channelManager(_:didActivate:) is not called under specific circumstances.
What works:
App is in foreground, receives PTT push → didActivate is called ✅
App receives audio in foreground, then is backgrounded → subsequent pushes trigger didActivate ✅
What doesn't work:
App is launched, user joins channel, then immediately backgrounds
PTT push arrives while app is backgrounded
incomingPushResult is called, I return .activeRemoteParticipant(participant)
The system UI shows the speaker name correctly
However, didActivate is never called
Audio data arrives via WebSocket but cannot be played (no audio session)
Setup:
Channel joined successfully before backgrounding
UIBackgroundModes includes push-to-talk
No manual audio session activation (setActive) anywhere in my code
AVAudioEngine setup only happens inside didActivate delegate method
Issue persists even after channel restoration via channelDescriptor(restoredChannelUUID:)
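For reference, a sketch of the delegate flow being described, with illustrative names and the remaining required methods stubbed; audio startup stays inside didActivate, since the system owns session activation for PTT.

```swift
import AVFAudio
import PushToTalk
import UIKit

// Sketch of the described flow (class name and participant are placeholders).
final class PTTCoordinator: NSObject, PTChannelManagerDelegate {
    let audioEngine = AVAudioEngine()   // graph setup omitted for brevity

    func incomingPushResult(channelManager: PTChannelManager,
                            channelUUID: UUID,
                            pushPayload: [String: Any]) -> PTPushResult {
        // Returning an active remote participant should lead to didActivate.
        .activeRemoteParticipant(PTParticipant(name: "Remote", image: nil))
    }

    func channelManager(_ channelManager: PTChannelManager,
                        didActivate audioSession: AVAudioSession) {
        try? audioEngine.start()   // only start audio once the session is live
    }

    func channelManager(_ channelManager: PTChannelManager,
                        didDeactivate audioSession: AVAudioSession) {
        audioEngine.stop()
    }

    // Remaining required delegate methods, stubbed for brevity.
    func channelManager(_ channelManager: PTChannelManager,
                        receivedEphemeralPushToken pushToken: Data) {}
    func channelManager(_ channelManager: PTChannelManager,
                        channelUUID: UUID,
                        didBeginTransmittingFrom source: PTChannelTransmitRequestSource) {}
    func channelManager(_ channelManager: PTChannelManager,
                        channelUUID: UUID,
                        didEndTransmittingFrom source: PTChannelTransmitRequestSource) {}
}
```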
Question:
Is this expected behavior or a bug? If expected, what's the correct approach to handle incoming PTT audio when the app is backgrounded and hasn't received audio while in the foreground yet?
I'm trying to save the value of NSURLVolumeURLKey when creating a bookmark. When resolving the bookmark later, I pass the bookmark data to +resourceValuesForKeys:fromBookmarkData:, but I always get "file:///" back, which is wrong.
In contrast, the value for NSURLVolumeURLForRemountingKey is properly saved and restored using the same method.
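A minimal snippet of the pattern described, using the Swift equivalents of those Objective-C calls; the path is a placeholder.

```swift
import Foundation

// Minimal repro sketch of the behaviour described (placeholder path).
let fileURL = URL(fileURLWithPath: "/Users/me/Documents/test.txt")
let keys: Set<URLResourceKey> = [.volumeURLKey, .volumeURLForRemountingKey]
let bookmark = try fileURL.bookmarkData(options: [],
                                        includingResourceValuesForKeys: keys,
                                        relativeTo: nil)
let saved = URL.resourceValues(forKeys: keys, fromBookmarkData: bookmark)
print(saved?.allValues[.volumeURLKey] as Any)               // reportedly file:///
print(saved?.allValues[.volumeURLForRemountingKey] as Any)  // round-trips correctly
```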
For other permission prompts in the iOS ecosystem, we have the option to configure the text shown in the prompt via keys in the Info.plist. This does not appear to be the case with regard to the age range permission prompt. The text of the prompt implies the app includes a differentiated experience for child or teen content and that confirming age unlocks more features (making it seem optional for using the app).
Is there a plan for app developers to be able to update that permission prompt similarly to how we can configure others?
If so, is there any timeline we can expect that on?
Testing Environment:
iOS: 26.0 Beta 7
Xcode: Beta 6
Description:
We are implementing the new BGContinuedProcessingTask API introduced in iOS 26. We have followed the official documentation and WWDC session guidance to configure our project.
The Background Modes (processing) and Background GPU Access capabilities have been added in Xcode.
The com.apple.developer.background-tasks.continued-processing.gpu entitlement is present and set to true in the .entitlements file.
The provisioning profile details viewed within Xcode explicitly show that the "Background GPU Access" capability and the corresponding entitlement are included.
Despite this correct configuration, when running the app on supported hardware (iPhone 16 Pro), a call to BGTaskScheduler.supportedResources.contains(.gpu) consistently returns false.
This prevents us from setting request.requiredResources = .gpu. As a result, when the BGContinuedProcessingTask starts without the GPU resource flag, our internal Metal-based exporter attempts to access the GPU and is terminated by the system, throwing an IOGPUMetalError: Insufficient Permission (to submit GPU work from background).
We have performed extensive debugging, including a full reset of the provisioning profile (removing/re-adding capabilities, toggling automatic signing, cleaning build folders, and reinstalling the app), but the issue persists. This strongly suggests a bug in the iOS 26 beta where the runtime is failing to correctly validate a valid entitlement.
Additionally, we've observed inconsistent behavior across devices. On an A16-based iPad, the task submits successfully (BGTaskScheduler.submit does not throw an error), but the launch handler is never invoked by the system. On the iPhone 16 Pro, the handler is invoked, but we encounter the supportedResources issue described above. This leads us to ask for clarification on the exact hardware requirements for this feature. We hypothesize that it may be limited to devices that support Apple Intelligence (A17 Pro and newer). Could you please confirm this and provide official documentation on the device support criteria?
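For reference, a sketch of the failing check and the intended submission path. The supportedResources check, requiredResources assignment, and identifier mirror what is described above; the BGContinuedProcessingTaskRequest initializer shape shown here is our assumption from the WWDC session, not verified API.

```swift
import BackgroundTasks

// Sketch of the check and submission described above; the identifier must be
// covered by the BGTaskSchedulerPermittedIdentifiers wildcard from Info.plist.
func submitGPUExportTask() {
    guard BGTaskScheduler.shared.supportedResources.contains(.gpu) else {
        print("GPU continued processing not supported")  // the failing check
        return
    }
    let request = BGContinuedProcessingTaskRequest(
        identifier: "com.company.test.export",
        title: "Exporting video",            // title/subtitle shape assumed
        subtitle: "Rendering with Metal"
    )
    request.requiredResources = .gpu
    do {
        try BGTaskScheduler.shared.submit(request)
    } catch {
        print("submit failed: \(error)")
    }
}
```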
Steps to Reproduce:
Create a new Xcode project.
In Signing & Capabilities, add "Background Modes" (with "Background processing" checked) and "Background GPU Access".
Add a permitted identifier (e.g., "com.company.test.*") to BGTaskSchedulerPermittedIdentifiers in Info.plist.
In application(_:didFinishLaunchingWithOptions:) or a ViewController's viewDidLoad, log the result of BGTaskScheduler.shared.supportedResources.contains(.gpu).
Build and run on a physical, supported device (e.g., iPhone 16 Pro).
Expected Results:
The log should indicate that BGTaskScheduler.shared.supportedResources.contains(.gpu) returns true.
Actual Results:
The log shows that BGTaskScheduler.shared.supportedResources.contains(.gpu) returns false.
We are using AgeRangeService.requestAgeRange(ageGates:in:) with an age gate of 18 to verify adult users.
The system prompt always displays the lower-bound wording (“17 or Younger”), even when the app’s requirement is to verify users who are 18 or older. We understand the UI is system-controlled; however, this wording causes confusion for users, QA, and product teams, as it appears to indicate a child-only flow even when requesting adult verification.
Based on the demonstration video, Apple's example appears to show a different, more coherent message, and it confirms that specifying 18 or older in the implementation is correct.
For a little more context: we are building a kind of React Native wrapper that receives that age-gate value, which is 18, as a parameter.
We attempted to run a burn-in test while connected to our MacBook Pro M4 Max, but this crashed about 10 minutes into testing.
We tried to run a 2-hour burn-in on the M4 Max host while charging the battery from below 5%, running six bus-powered drives (via ATTO/Blackmagic/IOmeter), hitting the RJ45 port at 2.5 Gbps (via JPerf), and streaming at least 4K60 video content to two displays; however, the M4 Max crashed in 20 minutes.
Topic:
App & System Services
SubTopic:
Core OS
I can reproduce a bug where CallKit doesn't activate the audio session after an outgoing call is put on hold because of an incoming call.
VoIP calling with CallKit
Steps to reproduce:
Download the Speakerbox example app from the link above and run it in Xcode
Start a new outgoing call
Call your phone from other phone
Hold and Accept the call
After a few secs finish the call from the other phone
The outgoing call will be still on hold
Click on the call and click Toggle Hold
The call won't become active again because the audio session is never activated.
Logs:
Received provider(_:didDeactivate:)
Received provider(_:didDeactivate:)
Received provider(_:didDeactivate:)
Received provider(_:didDeactivate:)
Received provider(_:didDeactivate:)
Requested transaction successfully
Starting audio
Type: stdio
AURemoteIO.cpp:1162 failed: 561017449 (enable 3, outf< 1 ch, 44100 Hz, Float32> inf< 1 ch, 44100 Hz, Float32>)
Type: Error | Timestamp: 2024-08-15 12:20:29.949437+02:00 | Process: Speakerbox | Library: libEmbeddedSystemAUs.dylib | Subsystem: com.apple.coreaudio | Category: aurioc | TID: 0x19540d
AVAEInternal.h:109 [AVAudioEngineGraph.mm:1344:Initialize: (err = PerformCommand(*outputNode, kAUInitialize, NULL, 0)): error 561017449
Type: Error | Timestamp: 2024-08-15 12:20:29.949619+02:00 | Process: Speakerbox | Library: AVFAudio | Subsystem: com.apple.avfaudio | Category: avae | TID: 0x19540d
Couldn't start Apple Voice Processing IO: Error Domain=com.apple.coreaudio.avfaudio Code=561017449 "(null)" UserInfo={failed call=err = PerformCommand(*outputNode, kAUInitialize, NULL, 0)}
Type: Notice | Timestamp: 2024-08-15 12:20:29.949730+02:00 | Process: Speakerbox | Library: Speakerbox | TID: 0x19540d
Route change:
Type: Notice | Timestamp: 2024-08-15 12:20:30.167498+02:00 | Process: Speakerbox | Library: Speakerbox | TID: 0x19540d
ReasonUnknown
Type: Notice | Timestamp: 2024-08-15 12:20:30.167549+02:00 | Process: Speakerbox | Library: Speakerbox | TID: 0x19540d
Previous route:
Type: Notice | Timestamp: 2024-08-15 12:20:30.167568+02:00 | Process: Speakerbox | Library: Speakerbox | TID: 0x19540d
<AVAudioSessionRouteDescription: 0x302c00bc0,
inputs = (
"<AVAudioSessionPortDescription: 0x302c01330, type = MicrophoneBuiltIn; name = iPhone Mikrofon; UID = Built-In Microphone; selectedDataSource = (null)>"
);
outputs = (
"<AVAudioSessionPortDescription: 0x302c004d0, type = Receiver; name = Vev\U0151; UID = Built-In Receiver; selectedDataSource = (null)>"
)>
Type: Notice | Timestamp: 2024-08-15 12:20:30.167626+02:00 | Process: Speakerbox | Library: Speakerbox | TID: 0x19540d
On a MacBook Pro (16 GB of RAM, 500 GB SSD, macOS Sequoia 15.7.1, M3 chip), I am running some python3 code in a conda environment that requires lots of RAM, and sure enough, once physical memory is almost exhausted, swapfiles of about 1 GB each start being created, which I can see in /System/Volumes/VM. This folder has about 470 GB of available space at the start of the process (visible through Get Info). However, once about 40 swapfiles have been created, for a total of about 40 GB of virtual memory occupied (and thus with still plenty of available space in VM), zsh kills the python process responsible for the RAM usage (notably, it does not kill another python process using only about 100 MB of RAM). The message received is "zsh: killed" in the tmux pane where the process's logging is printed.
All the documentation I was able to consult says that macOS is designed to use up to all available storage on the startup disk (which is the one I am using since I have only one disk and the available space aforementioned reflects this) for swapping, when physical RAM is not enough. Then why is the process killed long before the swapping area is exhausted? In contrast, the same process on a Linux machine (basic python venv here) just keeps swapping, and never gets killed until swap area is exhausted.
One last note, I do not have administrator rights on this device, so I could not run dmesg to retrieve more precise information, I can only check with df -h how the swap area increases little by little. My employer's IT team confirmed that they do not mess with memory usage on managed profiles, so macOS is just doing its thing.
Thanks for any insight you can share on this issue, is it a known bug (perhaps with conda/python environments) or is it expected behaviour? Is there a way to keep the process from being killed?
Topic:
App & System Services
SubTopic:
Core OS
Recently I noticed an app called “Lookus”. Even if I force‑kill it, it still seems to obtain information such as my charging status and network status, and it can even send real‑time notifications. I’m curious how this is technically possible. Does anyone know how this could be achieved?
Hello team,
I am developing a security app where I deny certain flows/packets if they are communicating with known malicious endpoints. For that I want to make use of Network Extensions such as the new URLFilter or ContentFilter (NEURLFilterManager, NEFilterDataProvider, NEFilterControlProvider).
Does NEURLFilterManager require the user's device to be on a minimum of iOS 26?
Do any of these APIs/extensions require the device to be managed/supervised, or can they be released to all consumers?
Thanks,
Topic:
App & System Services
SubTopic:
Networking
Hello,
I have an app that is using iOS 26 Network Framework APIs.
It is using QUIC, TLS 1.3 and Bonjour. For TLS I am using a PKCS#12 identity.
All works well and as expected if the devices (iPhone with no cellular, iPhone with cellular, and iPad no cellular) are all on the same wifi network.
If I turn off my router (i.e., no more Wi-Fi network) and leave the Wi-Fi toggle on on the iOS devices, only the non-cellular iPhone and iPad are able to discover and connect to each other. My iPhone with cellular is not able to.
By sharing my logs with Cursor AI, it was determined that the connection between the two problematic peers (the iPad with no cellular and the iPhone with cellular) never even makes it to the TLS step, because I never see the logs where I print out the certs I compare.
I tried doing "builder.requiredInterfaceType(.wifi)" but doing that blocked the two non cellular devices from working. I also tried "builder.prohibitedInterfaceTypes([.cellular])" but that also did not work.
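For reference, the equivalent constraints expressed directly on NWParameters; the "builder" above presumably wraps something like this, and the QUIC options/ALPN string here are placeholders.

```swift
import Network

// Sketch of the two interface constraints tried above, on NWParameters directly.
let quicOptions = NWProtocolQUIC.Options(alpn: ["my-proto"])  // placeholder ALPN
let parameters = NWParameters(quic: quicOptions)
parameters.requiredInterfaceType = .wifi              // blocked the non-cellular pair
// parameters.prohibitedInterfaceTypes = [.cellular]  // also didn't help, per above
```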
Is AWDL on its way out? Should I focus my energy on Wi-Fi Aware?
Regards,
Captadoh
Greetings. I have an app today that uses Multipeer Connectivity extensively. Currently, when the user switches away from the app, MPC disconnects the session(s) - this is apparently by design (per other feedback). I'd like to hear if anyone has experimented with iOS 9 multitasking / multipeer and whether MPC sessions can stay alive? Thanks
Topic:
App & System Services
SubTopic:
Core OS
Tags:
Background Tasks
Multipeer Connectivity
Core Bluetooth
From time to time the subject of NECP comes up, both here on DevForums and in DTS cases. I've posted about this before, but I wanted to collect those tidbits into a single coherent post.
If you have questions or comments, start a new thread in the App & System Services > Networking subtopic and tag it with Network Extension. That way I’ll be sure to see it go by.
Share and Enjoy
—
Quinn “The Eskimo!” @ Developer Technical Support @ Apple
let myEmail = "eskimo" + "1" + "@" + "apple.com"
A Peek Behind the NECP Curtain
NECP stands for Network Extension Control Protocol. It’s a subsystem within the Apple networking stack that controls which programs have access to which network interfaces. It’s vitally important to the Network Extension subsystem, hence the name, but it’s used in many different places. Indeed, a very familiar example of its use is the Settings > Mobile Data [1] user interface on iOS.
NECP has no explicit API, although there are APIs that offer some insight into its state. Continuing the Settings > Mobile Data example above, there is a little-known API, CTCellularData in the Core Telephony framework, that returns whether your app has access to WWAN.
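For example, a minimal sketch of that check; the handler is invoked as the restriction state becomes known or changes.

```swift
import CoreTelephony

// Minimal sketch: CTCellularData reports whether *this app* may use
// cellular data, one observable slice of the policy NECP enforces.
let cellularData = CTCellularData()
cellularData.cellularDataRestrictionDidUpdateNotifier = { state in
    switch state {
    case .restricted:             print("WWAN access restricted")
    case .notRestricted:          print("WWAN access allowed")
    case .restrictedStateUnknown: print("WWAN access state unknown")
    @unknown default:             print("WWAN access state unknown")
    }
}
```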
Despite having no API, NECP is still relevant to developers. The Settings > Mobile Data example is one place where it affects app developers, but it's most important for Network Extension (NE) developers. A key use case for NECP is to prevent VPN loops. When starting an NE provider, the system configures the NECP policy for the NE provider's process to prevent it from using a VPN interface. This means that you can safely open a network connection inside your VPN provider without having to worry about its traffic being accidentally routed back to you. This is why, for example, an NE packet tunnel provider can use any networking API it wants, including BSD Sockets, to run its connection without fear of creating a VPN loop [2].
One place that NECP shows up regularly is the system log. Next time you see a system log entry like this:
type: debug
time: 15:02:54.817903+0000
process: Mail
subsystem: com.apple.network
category: connection
message: nw_protocol_socket_set_necp_attributes [C723.1.1:1] setsockopt 39 SO_NECP_ATTRIBUTES
…
you’ll at least know what the necp means (-:
Finally, a lot of NECP infrastructure is in the Darwin open source. As with all things in Darwin, it’s fine to poke around and see how your favourite feature works, but do not incorporate any information you find into your product. Stuff you uncover by looking in Darwin is not considered API.
[1] Settings > Cellular Data if you speak American (-:
[2] Network Extension providers can call the createTCPConnection(to:enableTLS:tlsParameters:delegate:) method to create an NWTCPConnection [3] that doesn’t run through the tunnel. You can use that if it’s convenient but you don’t need to use it.
[3] NWTCPConnection is now deprecated, but there are non-deprecated equivalents. For the full story, see NWEndpoint History and Advice.
Revision History
2025-12-12 Replaced “macOS networking stack” with “Apple networking stack” to avoid giving the impression that this is all about macOS. Added a link to NWEndpoint History and Advice. Made other minor editorial changes.
2023-02-27 First posted.