
ARKit 101: How to Place a Virtual TV & Play a Video on It in Augmented Reality



In an earlier tutorial, we placed the Mona Lisa on vertical surfaces such as walls, books, and monitors using ARKit 1.5. By combining the power of SceneKit and SpriteKit (Apple's 2D graphics engine), we can play a video on a flat surface in ARKit.

This tutorial teaches you how to build augmented reality apps for iPads and iPhones using ARKit. Specifically, we'll go over how to play a video on a 3D TV in ARKit.

What Will You Learn?

We'll learn how to play a video on a 2D plane using SceneKit and SpriteKit.

  Minimum Requirements

  Step 1: Download the Assets You Need

    To make it easier to follow this tutorial, I have created a folder with the necessary 2D assets and the Swift file needed for the project. These files ensure you will not get lost in this guide, so download the zipped folder containing the assets and extract it.

    Step 2: Set the AR project in Xcode

    If you are unsure how to do this, follow Step 2 in our earlier post on placing a 3D object using hitTest to set up your AR project in Xcode. Be sure to give your project a different name, such as NextReality_Tutorial9. Do a quick test run before proceeding with the steps below.
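    If you started from Xcode's standard AR template, your ViewController should end up looking roughly like this sketch (the names come from the template; an approximation, not a line-for-line copy of the linked post):

    ```swift
    import UIKit
    import ARKit

    class ViewController: UIViewController, ARSCNViewDelegate {

        @IBOutlet var sceneView: ARSCNView!

        override func viewDidLoad() {
            super.viewDidLoad()
            // Receive renderer callbacks and show FPS/timing info.
            sceneView.delegate = self
            sceneView.showsStatistics = true
            // The template loads the default ship scene; we replace this later.
            sceneView.scene = SCNScene(named: "art.scnassets/ship.scn")!
        }

        override func viewWillAppear(_ animated: Bool) {
            super.viewWillAppear(animated)
            // Start world tracking when the view appears.
            let configuration = ARWorldTrackingConfiguration()
            sceneView.session.run(configuration)
        }

        override func viewWillDisappear(_ animated: Bool) {
            super.viewWillDisappear(animated)
            // Pause the session when the view goes away.
            sceneView.session.pause()
        }
    }
    ```

    If your project builds and shows the camera feed with the ship model, the setup is working.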

    Step 3: Import assets to your project

    In the project navigator, click on the "Assets.xcassets" folder. We'll add our 2D images there. Right-click on the left pane of the area on the right side of the project navigator, select "Import," and add "overlay_grid.png" from the unzipped Assets folder.

    Right-click on the "art.scnassets" folder, which is where you will keep your 3D SceneKit format files. Then, select the "Add Files to art.scnassets" option, and add the "tv.dae" file from the unzipped "Assets" folder you downloaded in Step 1 above.

    Next, again in the project navigator, right-click on the yellow folder for "NextReality_Tutorial9" (or whatever you named your project). Then select the "Add Files to NextReality_Tutorial9" option.

    Next, navigate to the unzipped "Assets" folder and select the "Grid.swift" file. Be sure to check "Copy items if needed" and leave everything else as is. Then, click "Add".

    This file helps render an image of a grid for each horizontal plane ARKit detects.

    Step 4: Use hitTest to Place the 3D TV on a Detected Horizontal Plane

    To quickly review ARKit's plane detection capabilities, take a look at our horizontal plane detection guide.

    Open the "ViewController.swift" class by double-clicking it. If you want to follow along with the final Step 4 code, just open that link to view it on GitHub.

    In the "ViewController.swift" file, change the scene creation in the viewDidLoad() method. Change it from:

      let scene = SCNScene(named: "art.scnassets/ship.scn")!

    To the following (which ensures that we do not create a scene with the default ship model):

      let scene = SCNScene()

    Next, find this line at the top of the file:

      @IBOutlet var sceneView: ARSCNView!

    Add this line below it to create an array of "Grid"s for all horizontal planes detected:

      var grids = [Grid]()

    Copy and paste the two methods listed below to the end of the file, before the last curly brace (}) in the file. These methods allow us to add our grid to any horizontal planes detected by ARKit, as a visual indicator.

      func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
          guard let planeAnchor = anchor as? ARPlaneAnchor else { return }
          let grid = Grid(anchor: planeAnchor)
          self.grids.append(grid)
          node.addChildNode(grid)
      }

      func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
          guard let planeAnchor = anchor as? ARPlaneAnchor else { return }
          let grid = self.grids.filter { grid in
              return grid.anchor.identifier == planeAnchor.identifier
          }.first

          guard let foundGrid = grid else {
              return
          }

          foundGrid.update(anchor: planeAnchor)
      }

    Let's quickly go over what happens in these two methods:

    1. didAdd() is called whenever a new node is added to the ARSCNView. Here, we add the grid image we imported onto the plane that was detected.
    2. didUpdate() is called whenever newer ARPlaneAnchor nodes are detected, or when the plane is expanded. In that case, we want to update and expand our grid as well. We do that by calling update() on the specific Grid.
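    The Grid class these callbacks rely on comes from the Grid.swift file you added in Step 3. The real implementation is in the downloaded assets, but it behaves roughly like this sketch (the initializer, update(anchor:) method, and anchor property match how the renderer callbacks use it; the internals are an approximation):

    ```swift
    import ARKit
    import SceneKit

    class Grid: SCNNode {
        var anchor: ARPlaneAnchor
        private let planeGeometry = SCNPlane()
        private let planeNode = SCNNode()

        init(anchor: ARPlaneAnchor) {
            self.anchor = anchor
            super.init()
            // Texture the plane with the grid image imported in Step 3.
            let material = SCNMaterial()
            material.diffuse.contents = UIImage(named: "overlay_grid.png")
            planeGeometry.materials = [material]
            planeNode.geometry = planeGeometry
            // SCNPlane stands vertically by default; lay it flat on the surface.
            planeNode.eulerAngles.x = -.pi / 2
            addChildNode(planeNode)
            update(anchor: anchor)
        }

        required init?(coder: NSCoder) {
            fatalError("init(coder:) has not been implemented")
        }

        // Called from renderer(_:didUpdate:for:) as ARKit refines the plane.
        func update(anchor: ARPlaneAnchor) {
            self.anchor = anchor
            planeGeometry.width = CGFloat(anchor.extent.x)
            planeGeometry.height = CGFloat(anchor.extent.z)
            planeNode.position = SCNVector3(anchor.center.x, 0, anchor.center.z)
        }
    }
    ```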

    Now let's enable feature points. Below this line in viewDidLoad () :

      sceneView.showsStatistics = true 

    Add the following:

      sceneView.debugOptions = ARSCNDebugOptions.showFeaturePoints 

    Then, let's turn on horizontal plane detection. Below this line in viewWillAppear():

      let configuration = ARWorldTrackingConfiguration () 

    Add the following:

      configuration.planeDetection = .horizontal

    This is very important! It ensures that ARKit can detect horizontal planes in the real world. The feature points allow us to see all the 3D points ARKit is able to detect.

    Now, run your app on your phone and walk around. Focus on a well-lit horizontal surface such as the ground or a table; you should be able to see blue grids appear as horizontal planes are detected:

    Next, let's add the tap gesture recognizer that lets us place the TV. Add these lines to the end of viewDidLoad():

      let gestureRecognizer = UITapGestureRecognizer(target: self, action: #selector(tapped))
      sceneView.addGestureRecognizer(gestureRecognizer)

    Now, let's add tapped(), which converts the 2D coordinate of the tapped location on our phone's screen into a 3D coordinate using hitTest.

    Add this to the end of the file, but before the last curly brace:

      @objc func tapped(gesture: UITapGestureRecognizer) {
          // Get the 2D position of the touch event on the screen
          let touchPosition = gesture.location(in: sceneView)

          // Translate the 2D point to a 3D point using hitTest (existing plane)
          let hitTestResults = sceneView.hitTest(touchPosition, types: .existingPlaneUsingExtent)

          guard let hitTest = hitTestResults.first else {
              return
          }
          addTV(hitTest)
      }

    Finally, add addTV() at the end of the file, but before the last curly brace:

      func addTV(_ hitTestResult: ARHitTestResult) {
          let scene = SCNScene(named: "art.scnassets/tv.scn")!
          let tvNode = scene.rootNode.childNode(withName: "tv_node", recursively: true)
          tvNode?.position = SCNVector3(hitTestResult.worldTransform.columns.3.x, hitTestResult.worldTransform.columns.3.y, hitTestResult.worldTransform.columns.3.z)
          self.sceneView.scene.rootNode.addChildNode(tvNode!)
      }

    This method ensures that we add our 3D TV at the 3D coordinate calculated by hitTest. Run the app and tap on a detected horizontal plane. You should now be able to place a TV every time you tap, like this:

    Checkpoint: Your entire project at the end of this step should look like the final Step 4 code on my GitHub.

    Step 5: Play a video on our 3D TV!

    What's cooler than watching a video on our phones? Watching a video in augmented reality on our phones! If you remember our last tutorial, we placed the Mona Lisa on a wall. Let's use the video from that tutorial and play it on our 3D TV.

    Let's import the video into our project. In the project navigator, right-click on the yellow folder for "NextReality_Tutorial9" (or whatever you named your project). Select the "Add Files to NextReality_Tutorial9" option, and choose to add the "video.mov" file (you should see something like this):

    Then, let's go back to our addTV () method.

    Right above this line:

      self.sceneView.scene.rootNode.addChildNode(tvNode!)

    Add this code:

      let tvScreenPlaneNode = tvNode?.childNode(withName: "screen", recursively: true)
      let tvScreenPlaneNodeGeometry = tvScreenPlaneNode?.geometry as! SCNPlane

      let tvVideoNode = SKVideoNode(fileNamed: "video.mov")
      let videoScene = SKScene(size: .init(width: tvScreenPlaneNodeGeometry.width * 1000, height: tvScreenPlaneNodeGeometry.height * 1000))
      videoScene.addChild(tvVideoNode)

      tvVideoNode.position = CGPoint(x: videoScene.size.width / 2, y: videoScene.size.height / 2)
      tvVideoNode.size = videoScene.size

      let tvScreenMaterial = tvScreenPlaneNodeGeometry.materials.first(where: { $0.name == "video" })
      tvScreenMaterial?.diffuse.contents = videoScene

      tvVideoNode.play()

    Here, we load our video into an SKVideoNode and attach it to a SpriteKit scene (SKScene). We then size this scene correctly and set it as the material of our TV's screen SCNNode. This ensures that our video scene is glued to our TV. Then, we play the video.
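    One small tweak worth considering (my suggestion, not something the tutorial requires): if you store the SKVideoNode in a property after creating it in addTV(), you can pause and resume playback later. The property and method names below are illustrative additions to ViewController:

    ```swift
    // Hypothetical additions to ViewController: assign `currentVideoNode`
    // right after creating tvVideoNode in addTV(), then call
    // toggleVideoPlayback() from another gesture (e.g. a double tap).
    var currentVideoNode: SKVideoNode?
    var isVideoPaused = false

    func toggleVideoPlayback() {
        guard let videoNode = currentVideoNode else { return }
        if isVideoPaused {
            videoNode.play()
        } else {
            videoNode.pause()
        }
        isVideoPaused.toggle()
    }
    ```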

    Run the app again. Now, after placing the TV, the video should start playing, and you should see something like this:

    Checkpoint: Your entire project at the end of this step should look like the final Step 5 code on my GitHub.

    What We Have Done

    Success! Using the steps above, we were able to place a 3D TV in augmented reality and play a video on it with ARKit. Imagine the future implications of this kind of AR capability. Eventually, when AR glasses are commonplace, we will all be able to watch TV on very large screens anywhere. This is already possible with devices like the HoloLens and the Magic Leap One, and now we have done it with ARKit, directly on our phones. Try taking things to the next level by playing your own videos on the 3D TV.
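    For example, one way to experiment (an AVPlayer-backed approach I'm suggesting here, not part of the tutorial code; it assumes your video file is bundled in the app) is to build the SKVideoNode from an AVPlayer so you can loop the video:

    ```swift
    import SpriteKit
    import AVFoundation

    // Builds a looping video node for a bundled movie file. Swap this in
    // for SKVideoNode(fileNamed:) inside addTV() if you want the video to
    // repeat instead of stopping at the end.
    func makeLoopingVideoNode(fileNamed name: String, withExtension ext: String) -> SKVideoNode? {
        guard let url = Bundle.main.url(forResource: name, withExtension: ext) else { return nil }
        let player = AVPlayer(url: url)
        // Restart from the beginning whenever playback finishes.
        NotificationCenter.default.addObserver(
            forName: .AVPlayerItemDidPlayToEndTime,
            object: player.currentItem,
            queue: .main
        ) { _ in
            player.seek(to: .zero)
            player.play()
        }
        return SKVideoNode(avPlayer: player)
    }
    ```

    The same AVPlayer approach also works for streaming a remote URL instead of a bundled file.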

    If you need the full code for this project, you can find it in my GitHub repo. I hope you enjoyed this ARKit tutorial. If you have any comments or feedback, please leave them in the comments section. Happy coding!

    Do not Miss : * How to add 2D images, like a painting or photo, on a wall in augmented reality *

    Cover image and screenshots by Ambuj Punn/Next Reality
