Integrate iOS device camera and motion features to produce augmented reality experiences in your app or game using ARKit.

How to control continuous movement by long pressing on the GameController
struct GameSystem: System {
    static let rootQuery = EntityQuery(where: .has(GameMoveComponent.self))

    init(scene: RealityKit.Scene) { }

    func update(context: SceneUpdateContext) {
        let root = context.scene.performQuery(Self.rootQuery)
        for entity in root {
            let game = entity.components[GameMoveComponent.self]!
            if let xMove = game.game.gc?.extendedGamepad?.dpad.xAxis.value,
               let yMove = game.game.gc?.extendedGamepad?.dpad.yAxis.value {
                print("x:\(xMove),y:\(yMove)")
                let x = entity.transform.translation.x + xMove * 0.01
                let y = entity.transform.translation.z - yMove * 0.01
                entity.transform.translation = [x, entity.transform.translation.y, y]
            }
        }
    }
}

I want to use the game controller's D-pad to continuously move an Entity in visionOS. When I added a query to handle button presses in my ECS System, I found that the update method is not called at 30 frames per second; instead, it executes once when I press the key and once when I release it. What is the reason for this? I want the entity to keep moving while I hold down the controller button. Is there a better solution? I'd like the movement to be smooth, without stuttering.
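One pattern that may help (offered as a sketch, not a confirmed diagnosis of why update only fires on press/release): poll the current D-pad values every frame inside the System and scale the movement by context.deltaTime, so the entity keeps moving for as long as the button is held and the speed stays frame-rate independent. GameMoveComponent is taken from the post; polling GCController.controllers() directly and the speed constant are assumptions.

import GameController
import RealityKit

struct GameMoveSystem: System {
    static let query = EntityQuery(where: .has(GameMoveComponent.self))
    static let speed: Float = 0.5   // hypothetical speed, in meters per second

    init(scene: RealityKit.Scene) { }

    func update(context: SceneUpdateContext) {
        // Poll the first connected controller's D-pad state on every update.
        guard let pad = GCController.controllers().first?.extendedGamepad?.dpad else { return }
        let dt = Float(context.deltaTime)
        for entity in context.scene.performQuery(Self.query) {
            // Scaling by deltaTime keeps the motion continuous and smooth.
            entity.transform.translation.x += pad.xAxis.value * Self.speed * dt
            entity.transform.translation.z -= pad.yAxis.value * Self.speed * dt
        }
    }
}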
Replies: 1 · Boosts: 0 · Views: 213 · Jul ’24
Floor stability with physics simulations
In RealityKit on visionOS, I scan the room and use the resulting mesh to create occlusion and physical boundaries. That works well, and I can place cubes (with physics enabled) on it too. However, I also want to update the mesh with versions from new scans, and that makes all my cubes jump. Is there a way to prevent this? I understand that the inaccuracies will produce a slightly different mesh each time, and I don't want to anchor the objects, so my guess is that I need to determine a fixed floor height and adjust the scanned meshes so they adhere to that fixed height. Any thoughts or ideas appreciated. /Andreas
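One possible workaround, sketched under assumptions (a floor height measured once from the first scan, and RealityView content to add the entity to): give the dynamic bodies a fixed, invisible static floor so they rest on a stable plane rather than on the re-scanned room mesh.

import RealityKit

// Sketch: a fixed static collision floor at a known height, so dynamic bodies
// rest on a stable plane instead of the slightly shifting re-scanned mesh.
func makeStaticFloor(atHeight floorHeight: Float) -> Entity {
    let floor = Entity()
    floor.components.set(CollisionComponent(
        shapes: [.generateBox(width: 10, height: 0.01, depth: 10)]))
    floor.components.set(PhysicsBodyComponent(
        massProperties: .default, material: nil, mode: .static))
    floor.position = [0, floorHeight, 0]
    return floor
}

// Usage (hypothetical): content.add(makeStaticFloor(atHeight: measuredFloorY))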
Replies: 1 · Boosts: 0 · Views: 341 · Jul ’24
I referred to the Enhanced Sensor Access code from WWDC24 to display the main camera of Vision Pro in the application interface, but it is not displaying
This is my code:

import Foundation
import ARKit
import SwiftUI

class CameraViewModel: ObservableObject {
    private var arKitSession = ARKitSession()
    @Published var capturedImage: UIImage?
    private var pixelBuffer: CVPixelBuffer?
    private var cameraAccessAuthorizationStatus = ARKitSession.AuthorizationStatus.notDetermined

    func startSession() {
        guard CameraFrameProvider.isSupported else {
            print("Device does not support main camera")
            return
        }
        Task {
            await requestCameraAccess()
            guard cameraAccessAuthorizationStatus == .allowed else {
                print("User did not authorize camera access")
                return
            }
            let formats = CameraVideoFormat.supportedVideoFormats(for: .main, cameraPositions: [.left])
            let cameraFrameProvider = CameraFrameProvider()
            print("Requesting camera authorization...")
            let authorizationResult = await arKitSession.requestAuthorization(for: [.cameraAccess])
            cameraAccessAuthorizationStatus = authorizationResult[.cameraAccess] ?? .notDetermined
            guard cameraAccessAuthorizationStatus == .allowed else {
                print("Camera data access authorization failed")
                return
            }
            print("Camera authorization successful, starting ARKit session...")
            do {
                try await arKitSession.run([cameraFrameProvider])
                print("ARKit session is running")
                guard let cameraFrameUpdates = cameraFrameProvider.cameraFrameUpdates(for: formats[0]) else {
                    print("Unable to get camera frame updates")
                    return
                }
                print("Successfully got camera frame updates")
                for await cameraFrame in cameraFrameUpdates {
                    guard let mainCameraSample = cameraFrame.sample(for: .left) else {
                        print("Unable to get main camera sample")
                        continue
                    }
                    print("Successfully got main camera sample")
                    self.pixelBuffer = mainCameraSample.pixelBuffer
                }
                DispatchQueue.main.async {
                    self.capturedImage = self.convertToUIImage(pixelBuffer: self.pixelBuffer)
                    if self.capturedImage != nil {
                        print("Successfully captured and converted image")
                    } else {
                        print("Image conversion failed")
                    }
                }
            } catch {
                print("ARKit session failed to run: \(error)")
            }
        }
    }

    private func requestCameraAccess() async {
        let authorizationResult = await arKitSession.requestAuthorization(for: [.cameraAccess])
        cameraAccessAuthorizationStatus = authorizationResult[.cameraAccess] ?? .notDetermined
        if cameraAccessAuthorizationStatus == .allowed {
            print("User granted camera access")
        } else {
            print("User denied camera access")
        }
    }

    private func convertToUIImage(pixelBuffer: CVPixelBuffer?) -> UIImage? {
        guard let pixelBuffer = pixelBuffer else {
            print("Pixel buffer is nil")
            return nil
        }
        let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
        let context = CIContext()
        if let cgImage = context.createCGImage(ciImage, from: ciImage.extent) {
            return UIImage(cgImage: cgImage)
        }
        print("Unable to create CGImage")
        return nil
    }
}

This is my log:

User granted camera access
Requesting camera authorization...
Camera authorization successful, starting ARKit session...
ARKit session is running
Successfully got camera frame updates
void * _Nullable NSMapGet(NSMapTable * _Nonnull, const void * _Nullable): map table argument is NULL
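One thing that stands out in the posted code (offered as a sketch, not a confirmed fix): the for-await loop over camera frames never ends, so the DispatchQueue.main.async block after it is never reached and capturedImage is never set. A variant of that loop, converting and publishing each frame from inside it, might look like this:

// Sketch: replaces the existing loop inside startSession(); publishes every frame
// instead of waiting for the never-ending loop to finish.
for await cameraFrame in cameraFrameUpdates {
    guard let mainCameraSample = cameraFrame.sample(for: .left) else { continue }
    let image = self.convertToUIImage(pixelBuffer: mainCameraSample.pixelBuffer)
    await MainActor.run {
        self.capturedImage = image   // update the @Published property on the main actor
    }
}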
Replies: 0 · Boosts: 0 · Views: 242 · Jul ’24
ARKit tracked images, best practices
I'm developing an augmented images app using ARKit. The images themselves are sourced online. The app is mostly done and working fine. However, I currently download the images the app will be tracking every time the app starts up. I'd like to avoid this by downloading the images and storing them on the device. My concern is that as the number of images grows, the app would store too many images on the device. I'd like some thoughts on the best way to approach this. For example, should I download and store some of the images in Core Data, or perhaps not store them at all?
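A minimal caching sketch, under assumptions (each downloaded image has a stable name and a known physical width in meters, and keeping the files in the Caches directory is acceptable instead of Core Data):

import ARKit
import UIKit

// Sketch: persist a downloaded image to Caches and build an ARReferenceImage from it,
// so the same image isn't re-downloaded on the next launch.
func cachedReferenceImage(named name: String, physicalWidth: CGFloat, data: Data) throws -> ARReferenceImage? {
    let cachesURL = try FileManager.default.url(for: .cachesDirectory, in: .userDomainMask,
                                                appropriateFor: nil, create: true)
    let fileURL = cachesURL.appendingPathComponent("\(name).png")
    if !FileManager.default.fileExists(atPath: fileURL.path) {
        try data.write(to: fileURL)   // persist the download for future launches
    }
    guard let cgImage = UIImage(contentsOfFile: fileURL.path)?.cgImage else { return nil }
    let reference = ARReferenceImage(cgImage, orientation: .up, physicalWidth: physicalWidth)
    reference.name = name
    return reference
}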
Replies: 1 · Boosts: 0 · Views: 223 · Jul ’24
RealityView not displaying content
I'm playing with visionOS and trying to get a USDZ file to load in a RealityView. It works fine if I use a Model3D, but if I use a RealityView nothing shows up. I'm using the fender_stratocaster asset right off the Apple web site, so it seems like it should work. This is the code:

RealityView { content in
    if let sphereEntity = try? await Entity(named: "fender_stratocaster") {
        content.add(sphereEntity)
        sphereEntity.position = [0, 0, 0]
        sphereEntity.transform.scale = [scale, scale, scale]
        let _ = print(sphereEntity)
    }
} update: { content in
    if let sphereEntity = content.entities.first {
        sphereEntity.transform.scale = [scale, scale, scale]
    }
}

Any clues as to why this is not showing would be appreciated.
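A small diagnostic sketch, assuming the USDZ file is added to the app target (the asset name and scale state mirror the post): replacing try? with do/catch surfaces the load error instead of silently showing nothing.

import RealityKit
import SwiftUI

struct GuitarView: View {
    @State private var scale: Float = 1.0

    var body: some View {
        RealityView { content in
            do {
                let guitar = try await Entity(named: "fender_stratocaster")
                guitar.transform.scale = [scale, scale, scale]
                content.add(guitar)
            } catch {
                // If the file isn't in the app bundle (or the name is wrong), this prints why.
                print("Failed to load entity: \(error)")
            }
        }
    }
}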
Replies: 3 · Boosts: 0 · Views: 282 · Jul ’24
VisionOS access ARKit when in shared space
I was planning to experiment with ARKit on visionOS to create a widget app that places small, room-persistent objects in the user's room, which the user can anchor anywhere they like. The trouble is that requiring a Full Space for this is limiting and not a great experience: those types of widgets only make sense when you want to glance at them quickly, not as part of the main task a user is performing. Is there any way the room's positional anchors can be stored and re-established whenever somebody opens an app in the Shared Space, rather than in a Full Space?
Replies: 1 · Boosts: 0 · Views: 245 · Jul ’24
ARKit body-tracking skeleton is terrible, right?
Hello, I am trying to create new outfits based on ARKit's body-tracking skeleton example, the controlled robot. Is it just me, or is this skeleton very awkward to work with? The bones all stick out like thorns and don't follow along the actual limbs, which makes it impossible to automatically weight-paint new meshes to the skeleton. Changing the bones is also not possible, since that results in distorted body tracking. I am an experienced modeller, but I have never seen such a strange skeleton. Even simple meshes are a pain to pair with these bones; you basically have to weight-paint everything manually. Or am I missing something?
Replies: 0 · Boosts: 0 · Views: 231 · Jul ’24
Object Tracking Sample Code
Are you planning on publishing a complete sample code project related to the Explore object tracking for visionOS session (wwdc2024/10101)? The animation at 12:50 where the globe opens up was especially impressive. Seeing how that was done while tracking to the globe would be very interesting. (I realize that we would have to create our own globe object in order for the code to work.)
Replies: 2 · Boosts: 0 · Views: 349 · Jun ’24
RoomPlan + Object Capture
We have an issue with Apple RoomPlan: on a regular basis, the captured objects are not positioned correctly in the model. This happens in about 50% of our cases, which makes the feature almost useless. Is there any idea how to solve that problem?
Replies: 0 · Boosts: 0 · Views: 215 · Jul ’24
Attach a Attachment to the hand VisionOS
I am trying to attach a button to the user's left hand. The position is tracked, and the button stays above the user's left hand, but it doesn't face the user, and it doesn't even face where the wrist is pointing. This is the main code snippet:

if model.editWindowAdded {
    let originalMatrix = model.originFromWristLeft
    let theattachment = attachments.entity(for: "sample")!
    entityDummy.addChild(theattachment)
    let testrotvalue = simd_quatf(real: 0.9906431,
                                  imag: SIMD3<Float>(-0.028681312, entityDummy.orientation.imag.y, 0.025926698))
    entityDummy.orientation = testrotvalue
    theattachment.position = [0, 0.1, 0]
    var timer = Timer.scheduledTimer(withTimeInterval: 0.1, repeats: true) { _ in
        let originalMatrix = model.originFromWristLeft
        print(originalMatrix.columns.0.y)
        let testrotvalue = simd_quatf(real: 0.9906431,
                                      imag: SIMD3<Float>(-0.028681312, 0.1, 0.025926698))
        entityDummy.orientation = testrotvalue
    }
}
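A sketch of one alternative, under the assumption that model.originFromWristLeft is a float4x4 wrist transform from hand tracking: deriving the orientation from that matrix on each update (instead of a hard-coded quaternion) keeps the attachment aligned with the wrist; Entity's look(at:from:relativeTo:) toward the head position is another option if it should face the user instead.

import RealityKit
import simd

// Hypothetical helper: place and orient the attachment holder from the wrist transform.
func updateAttachment(_ entityDummy: Entity, originFromWristLeft: simd_float4x4) {
    let wrist = Transform(matrix: originFromWristLeft)
    entityDummy.position = wrist.translation + SIMD3<Float>(0, 0.1, 0) // hover slightly above the wrist
    entityDummy.orientation = wrist.rotation                           // follow the wrist's rotation
}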
Replies: 2 · Boosts: 0 · Views: 339 · Jul ’24
ObjectCapture and ARObjectAnchor
Is it possible to capture both the images required for Object Capture and the scan data required to create an ARObjectAnchor (and be able to align the two to each other)? Perhaps an extension of this WWDC 2020 example that also integrates USDZ object capture (instead of just importing an external one)? https://developer.apple.com/documentation/arkit/arkit_in_ios/content_anchors/scanning_and_detecting_3d_objects?changes=_2
Replies: 2 · Boosts: 0 · Views: 352 · Jul ’24
visionOS 2.0 main camera image fusion
I want to align and fuse the video streams from the main camera and my external camera in visionOS 2.0, ensuring that the fused image remains directly in front of the field of view as the head moves, similar to a normal passthrough mode video image. I have already achieved and verified the static image alignment and fusion on the Vision Pro using screenshots from the main camera and the external video stream. However, I don't know how to perform real-time fusion with the main camera images. Could you please advise on how I can achieve this?
Replies: 1 · Boosts: 0 · Views: 212 · Jul ’24
Where does the device anchor locate?
Hi, my goal is to obtain the 6-DoF pose of the Apple Vision Pro, and I found a function that might satisfy my need:

final func queryDeviceAnchor(atTimestamp timestamp: TimeInterval) -> DeviceAnchor?

which returns a device anchor (containing the position and orientation of the headset). However, I couldn't find any documentation specifying where exactly the device anchor is located on the headset. Is it at the midpoint between the user's eyes? Is it at the centroid of the six world-facing tracking cameras? It would be really helpful if someone could provide a local transformation matrix (similar to a camera extrinsic) from a visible rigid component (say, the Digital Crown, the top button, or the laser scanner) to the device anchor. Thanks.
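For reference, a minimal sketch of how that query is typically reached (the timestamp choice and running the session inside the function are assumptions); it returns the device pose in the world origin's coordinate space, which is separate from the question of where that pose sits physically on the headset.

import ARKit
import QuartzCore

let session = ARKitSession()
let worldTracking = WorldTrackingProvider()

func currentDevicePose() async throws -> simd_float4x4? {
    // Run the session once; subsequent queries can reuse the running provider.
    try await session.run([worldTracking])
    guard let device = worldTracking.queryDeviceAnchor(atTimestamp: CACurrentMediaTime()) else {
        return nil
    }
    return device.originFromAnchorTransform  // 6-DoF pose of the headset in origin space
}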
Replies: 1 · Boosts: 1 · Views: 291 · Jul ’24
How to save point cloud data and view it
Hello, recently I’ve been studying point-cloud development. As a beginner in this field, I’m seeking some guidance on how to approach it. I want to obtain point cloud data and be able to display and view it as shown in the picture below. I used the "Displaying a Point Cloud Using Scene Depth" sample code as my starting point and then attempted to add a button that saves the data held in the Renderer's private variable particlesBuffer: MetalBuffer. The particlesBuffer array contains data structured as follows:

struct ParticleUniforms {
    simd_float3 position;
    simd_float3 color;
    float confidence;
};

My understanding is that this data represents the point cloud; if I am wrong, please let me know. Next, I wrote my own code to display this data in SceneKit by creating small spheres from the position and color values. However, in practice this method only lets me display about 30,000 spheres before it becomes very laggy. I believe my implementation might be incorrect, because a 3D scanner app I tried displays point clouds with much better performance. Could you please advise me on how to achieve an effect like the one shown in the image below? Thank you.
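One approach that may perform better than per-point sphere nodes, sketched under the assumption that the saved positions and colors are plain float3 arrays: build a single SCNGeometry whose primitive type is .point, so the whole cloud renders as one node rather than tens of thousands of spheres.

import SceneKit

// Sketch: one point-cloud geometry from position and color arrays.
func pointCloudNode(points: [SIMD3<Float>], colors: [SIMD3<Float>]) -> SCNNode {
    let vertexData = Data(bytes: points, count: points.count * MemoryLayout<SIMD3<Float>>.stride)
    let vertexSource = SCNGeometrySource(data: vertexData,
                                         semantic: .vertex,
                                         vectorCount: points.count,
                                         usesFloatComponents: true,
                                         componentsPerVector: 3,
                                         bytesPerComponent: MemoryLayout<Float>.size,
                                         dataOffset: 0,
                                         dataStride: MemoryLayout<SIMD3<Float>>.stride)
    let colorData = Data(bytes: colors, count: colors.count * MemoryLayout<SIMD3<Float>>.stride)
    let colorSource = SCNGeometrySource(data: colorData,
                                        semantic: .color,
                                        vectorCount: colors.count,
                                        usesFloatComponents: true,
                                        componentsPerVector: 3,
                                        bytesPerComponent: MemoryLayout<Float>.size,
                                        dataOffset: 0,
                                        dataStride: MemoryLayout<SIMD3<Float>>.stride)
    // With nil index data, the element treats the vertices as sequential points.
    let element = SCNGeometryElement(data: nil,
                                     primitiveType: .point,
                                     primitiveCount: points.count,
                                     bytesPerIndex: MemoryLayout<Int32>.size)
    element.pointSize = 4
    element.minimumPointScreenSpaceRadius = 1
    element.maximumPointScreenSpaceRadius = 8
    return SCNNode(geometry: SCNGeometry(sources: [vertexSource, colorSource], elements: [element]))
}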
Replies: 0 · Boosts: 0 · Views: 203 · Jul ’24
Object Tracking with RealityView
When I wanted to load the Reality Composer Pro scene containing object tracking, I tried the following code:

RealityView { content in
    if let model = try? await Entity(named: "Scene", in: realityKitContentBundle) {
        content.add(model)
    }
}

Obviously, this is not enough on its own. We need to add some configuration that enables object tracking for the RealityView. What do we need to add?
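A sketch of the kind of configuration that is likely needed (the reference object file name is hypothetical, and this assumes an immersive space is open): run an ARKitSession with an ObjectTrackingProvider loaded from a .referenceobject file, so object-anchored content in the scene can start tracking.

import ARKit
import RealityKit

let session = ARKitSession()

func startObjectTracking() async throws {
    // Hypothetical file name; the .referenceobject is produced by the object tracking training workflow.
    guard let url = Bundle.main.url(forResource: "MyObject", withExtension: "referenceobject") else { return }
    let referenceObject = try await ReferenceObject(from: url)
    let provider = ObjectTrackingProvider(referenceObjects: [referenceObject])
    try await session.run([provider])

    // Anchor updates can then be observed to attach or reveal entities.
    for await update in provider.anchorUpdates {
        print("Object anchor update: \(update.event)")
    }
}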
Replies: 2 · Boosts: 0 · Views: 440 · Jun ’24
Can you limit the area for AR body tracking to prevent the mesh from jumping across multiple bodies?
Hello there, I am currently experimenting with the body tracking sample from the AR Foundation Example Project. It works fine, but when there are multiple persons in front of the camera, the mesh jumps randomly from tracked skeleton to tracked skeleton. So I am looking for a way to define a specific area in front of the camera or to implement some other marker (maybe a body pose) to start and stop tracking. I am guessing Pose-Tracking could work. Whenever a body stands in a T-Pose, the Mesh gets applied to that body and then the script stops looking for new skeletons until the original skeleton is lost. Does somebody know which code to look at to achieve this?
Replies: 1 · Boosts: 0 · Views: 229 · Jul ’24