Asked by: 小点点

Kinect


I'm currently using an Xbox Kinect model 1414 with Processing 2.2.1. I'd like to use the right hand as a mouse to guide a character across the screen.

I've managed to draw an ellipse on the Kinect skeleton that follows the right hand joint. How can I get that joint's position so I can substitute it for mouseX and mouseY where needed?

Here is the code that tracks your right hand and draws a red ellipse on it:

import SimpleOpenNI.*;

SimpleOpenNI  kinect;

void setup()
{
  // instantiate a new context
  kinect = new SimpleOpenNI(this);
  kinect.setMirror(!kinect.mirror());
  // enable depthMap generation
  kinect.enableDepth();

  // enable skeleton generation for all joints
  kinect.enableUser();

  smooth();
  noStroke();

  // create a window the size of the depth information
  size(kinect.depthWidth(), kinect.depthHeight());
}



void draw()
{
  // update the camera...must do
  kinect.update();

  // draw depth image...optional (note: background(0) below paints over it)
  image(kinect.depthImage(), 0, 0);

  background(0);

  // check if the skeleton is being tracked for user 1 (the first user detected)
  if (kinect.isTrackingSkeleton(1))
  {
    int joint = SimpleOpenNI.SKEL_RIGHT_HAND;

    // draw a dot on their joint, so they know what's being tracked
    drawJoint(1, joint);

    // currently unused in this sketch
    PVector point1 = new PVector(-500, 0, 1500);
    PVector point2 = new PVector(500, 0, 700);
  }
}

///////////////////////////////////////////////////////

void drawJoint(int userID, int jointId) {
  // make a vector to store the joint position
  PVector jointPosition = new PVector();
  // put the 3D position of the joint into that vector
  kinect.getJointPositionSkeleton(userID, jointId, jointPosition);
  // convert the detected position to "projective" coordinates that will match the depth image
  PVector convertedJointPosition = new PVector();
  kinect.convertRealWorldToProjective(jointPosition, convertedJointPosition);
  // and display it, sized by distance (the closer the hand, the bigger the dot)
  fill(255, 0, 0);

  float ellipseSize = map(convertedJointPosition.z, 700, 2500, 50, 1);
  ellipse(convertedJointPosition.x, convertedJointPosition.y, ellipseSize, ellipseSize);
}

//////////////////////////// Event-based Methods

void onNewUser(SimpleOpenNI curContext, int userId)
{
  println("onNewUser - userId: " + userId);
  println("\tstart tracking skeleton");

  curContext.startTrackingSkeleton(userId);
}

void onLostUser(SimpleOpenNI curContext, int userId)
{
  println("onLostUser - userId: " + userId);
}

Any links or help of any kind would be much appreciated, thanks!


2 Answers

Anonymous

In your case, I'd suggest using the coordinates of the right hand joint. This is how you get them:

foreach (Skeleton skeleton in skeletons) {
    Joint RightHand = skeleton.Joints[JointType.HandRight];

    // skeleton-space coordinates, in meters
    double rightX = RightHand.Position.X;
    double rightY = RightHand.Position.Y;
    double rightZ = RightHand.Position.Z;
}

Note that we are working in three dimensions, so you will have x, y, and z coordinates.

FYI: you have to insert these lines of code in the SkeletonFrameReady event handler. If you still want the circle around the hand, have a look at the Skeleton-Basics WPF sample in the Kinect SDK.
Does this help?

Anonymous

It's not entirely clear what you're trying to achieve. If you just need the position of the hand in 2D screen coordinates, the code you posted already covers that (see the minimal sketch after this list):

  1. kinect.getJointPositionSkeleton() retrieves the 3D coordinates
  2. kinect.convertRealWorldToProjective() converts them to 2D screen coordinates
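
Distilled from the drawJoint() function in your own sketch, the pattern is just these two calls (user ID 1 and the right-hand joint are assumed here for illustration):

PVector realWorldPos = new PVector();
// get the joint's 3D position in real-world coordinates
kinect.getJointPositionSkeleton(1, SimpleOpenNI.SKEL_RIGHT_HAND, realWorldPos);

PVector screenPos = new PVector();
// project it to 2D coordinates that match the depth image
kinect.convertRealWorldToProjective(realWorldPos, screenPos);

// screenPos.x and screenPos.y can now stand in for mouseX and mouseY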

If you want to be able to swap between the Kinect-tracked hand coordinates and the mouse coordinates, you can store the PVector used in the 2D conversion as a variable visible to the whole sketch, updating it from the Kinect skeleton if one is being tracked, and from the mouse otherwise:

import SimpleOpenNI.*;

SimpleOpenNI  kinect;

PVector user1RightHandPos = new PVector(); // shared 2D position: Kinect hand or mouse fallback
float ellipseSize;

void setup()
{
  // instantiate a new context
  kinect = new SimpleOpenNI(this);
  kinect.setMirror(!kinect.mirror());
  // enable depthMap generation
  kinect.enableDepth();

  // enable skeleton generation for all joints
  kinect.enableUser();

  smooth();
  noStroke();

  // create a window the size of the depth information
  size(kinect.depthWidth(), kinect.depthHeight());
}



void draw()
{

    // update the camera...must do
    kinect.update();

    // draw depth image...optional (note: background(0) below paints over it)
    image(kinect.depthImage(), 0, 0); 

    background(0);

    // check if the skeleton is being tracked for user 1 (the first user detected)
    if (kinect.isTrackingSkeleton(1))
    {   
        updateRightHand2DCoords(1, SimpleOpenNI.SKEL_RIGHT_HAND);
        ellipseSize = map(user1RightHandPos.z, 700, 2500, 50, 1);
    }else{//if the skeleton isn't tracked, use the mouse
        user1RightHandPos.set(mouseX,mouseY,0);
        ellipseSize = 20;
    }

    //draw ellipse regardless of the skeleton tracking or mouse mode 
    fill(255, 0, 0);

    ellipse(user1RightHandPos.x, user1RightHandPos.y, ellipseSize, ellipseSize);
}

///////////////////////////////////////////////////////

void updateRightHand2DCoords(int userID, int jointId) {
    // make a vector to store the hand's 3D position
    PVector jointPosition = new PVector();
    // put the position of the tracked joint into that vector
    kinect.getJointPositionSkeleton(userID, jointId, jointPosition);
    // convert the detected hand position to "projective" coordinates that will match the depth image
    user1RightHandPos.set(0,0,0);//reset the 2D hand position before OpenNI conversion from 3D
    kinect.convertRealWorldToProjective(jointPosition, user1RightHandPos);
}

//////////////////////////// Event-based Methods

void onNewUser(SimpleOpenNI curContext, int userId)
{
    println("onNewUser - userId: " + userId);
    println("\tstart tracking skeleton");

    curContext.startTrackingSkeleton(userId);
}

void onLostUser(SimpleOpenNI curContext, int userId)
{
    println("onLostUser - userId: " + userId);
}

Alternatively, you can use a boolean to swap between mouse/Kinect modes while testing.
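
For example (a minimal sketch; the useMouse variable and the 'm' key binding are illustrative additions, not part of the code above):

// hypothetical sketch-level toggle for testing
boolean useMouse = false;

void keyPressed() {
  if (key == 'm') useMouse = !useMouse; // press 'm' to flip input modes
}

// then, in draw():
// if (!useMouse && kinect.isTrackingSkeleton(1)) { /* use the skeleton */ }
// else { user1RightHandPos.set(mouseX, mouseY, 0); }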

If you just need coordinates for testing, without having to get in front of the Kinect every time, I recommend having a look at the RecorderPlay example that ships with SimpleOpenNI (in Processing's contributed-libraries examples). OpenNI can record a scene, depth data included, to an .oni file, and playing a recording back only takes a different constructor:

kinect = new SimpleOpenNI(this,"/path/to/yourRecordingHere.oni"); 

One thing to bear in mind: the depth is stored at half the resolution, so the coordinates need to be doubled to be on par with the real-time version.
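
In drawing terms that just means scaling the projected coordinates, along these lines (a sketch; the factor of 2 applies only when playing back a half-resolution .oni recording):

// scale the projected hand position back up to the live sketch size
ellipse(user1RightHandPos.x * 2, user1RightHandPos.y * 2, ellipseSize, ellipseSize);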