Hi All,

I’m creating a card game experience with a varying number of players. The players are spaced out equally around a polygonal table, with one side per team and each side sized for the number of players in the largest team.

The actual laying out of the board is done and works fine, but I then want to automatically position the camera so you can see the entire table - so the more players in the game, the higher the camera will move. (This could also be modified by things like screen aspect ratio, which could mean you need to be higher or lower.) It’s tempting to just give manual control of the camera height (within a sanity-checked range), but it would be nicer to at least get the initial position right. Solving for the required height deterministically is going to involve some pretty headache-inducing maths though, so an iterative approach would probably be simpler.

At the start of the game the camera starts at the center of the table and moves out to the correct position over a second or so; if you change viewpoint, the camera also flows across.

Given this, I was wondering if I could achieve this as simply as checking the cull information on the base of each player’s board. If any of the boards are being culled then continue zooming further away. That doesn’t entirely work though, as I’d need a board intersecting the edge of the view to not count as inside, whereas for culling the reverse is true.

So the question becomes: is there an easy way to access this culling information, or would I be better off just casting my own planes and checking intersection? The table itself is static, so I could create a simple polygon to represent it (or even use a circle once we get beyond N teams), which would probably be faster than checking each player’s playing area individually.

So I guess it boils down to:

Given an arbitrary regular polygon at y=0, and a camera positioned at X=x and Z=z, how do I best determine the Y height and the look-at target vector to use so that the entire polygon fits on screen while filling as much of the screen as possible?

Any suggestions welcome as I’ve got a few ideas but no idea what would be the best approach!

Thanks,

Z

As I understood it, it was a top down card game right?

And you want the camera to be as close as possible without cutting any player out of the screen?

To solve this, I would loop through the players, finding the positions of the ones furthest to the right, left, top and bottom.

Then I would take those four points to be included and use simple math. Calculating the height given the length and the angle (which you get from the camera frustum) is simple trigonometry. But before calculating this height, find the center point in the middle of right, left, top and bottom, and apply the height from there.
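For the purely top-down case, that trig step is small enough to sketch standalone. The FOV values and player extents below are made-up numbers (not from the game), and it assumes the camera looks straight down at the center of the extents:

```java
// Top-down camera fit: pick the height at which the widest extent of the
// players just fills the frustum. All numbers here are illustrative.
public class TopDownFit {
    /** Height needed so a half-extent fits a frustum half-angle (radians). */
    static float heightFor(float halfExtent, float halfFov) {
        // tan(halfFov) = halfExtent / height  =>  height = halfExtent / tan(halfFov)
        return halfExtent / (float) Math.tan(halfFov);
    }

    public static void main(String[] args) {
        float halfFovY = (float) Math.toRadians(22.5); // 45 degree vertical FOV
        float halfFovX = (float) Math.toRadians(30.0); // wider horizontal FOV
        // Half-extents of the players' positions on the table (example numbers)
        float halfWidth = 6f, halfDepth = 4f;
        // The camera must be high enough for BOTH axes to fit
        float h = Math.max(heightFor(halfWidth, halfFovX),
                           heightFor(halfDepth, halfFovY));
        System.out.println(h);
    }
}
```

The binding axis is whichever of width/depth demands the greater height, which is why the max of the two is taken.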

Maybe I misunderstood the problem, but if I understood correctly then this is how I would solve it…

@Addez, thanks for the reply.

Essentially you are right; the complication is that it is not completely top-down though. Your view would be from above your place at the table, as though you were standing over a table playing the game in real life (but with buttons to switch the viewpoint around to other players’ places and/or vertically above the table).

The “4 corners” idea was one I considered too - essentially it’s the maths solution I mentioned in my first post. It gets complicated when not doing a directly top-down view though, because the angle keeps changing as you increase the height. It’s not as simple as taking the camera angle and doing basic trig to give the height - you need to solve for the height and angle that satisfy all 4 corner locations.

So given corners 1 → 4, you would work out the range of camera positions and angles that satisfies each corner, then solve them together to find the range that satisfies all of them, and then pick the “best” from those.

Maybe that still is going to be the best approach though…

I think the best idea is to make this a one-variable equation…

If we don’t know the angle nor the distance then we’re juggling two variables at once.

If we choose to look from your player’s head, for example.

Then you can do it like this:

- Decide the rotation around the Y axis.

- Take the positions of all players, making sure they are all at the same height (same Y position).

- Find the center point of all players and make the camera lookAt it.

- Then, for all players:

camera.getDirection().anglesFrom(player.getPosition());

Find the maximum and minimum (which should be negative) angles and save their positions.

- Take the middle point of those two extreme positions and save it.

- Decide the rotation around the camera.getLeft() axis.

Now all we have to do is find the appropriate height.

We want to have the players heads and their cards in the picture.

- Look at a player’s card position; any player will do.

- For all players:

camera.getDirection().anglesFrom(player.getCardPosition());

- Find the one with the shortest (or most negative) angle and save its position.

- For all players:

camera.getDirection().anglesFrom(player.getHeadPosition());

- Find the one with the biggest angle and save its position.

- Find the middle Y value of these two and set the Y value of the middle point from step 1 to it.

Then look at that middle point and the camera should be as centered as possible.

This is maybe more complicated than it needs to be but it’s a start.
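A rough sketch of the angle-scan steps above, flattened to the XZ plane. anglesFrom(…) isn’t a real jME call, so this computes the signed angle between the camera’s view direction and the direction to each player by hand:

```java
import java.util.List;

// Find the players at the extreme left/right angles from the camera's
// view direction, then take the midpoint of those two extreme positions.
public class AngleScan {
    record Vec2(float x, float z) {}

    /** Signed angle (radians) from dir to the vector camera->p, about Y. */
    static float signedAngle(Vec2 camPos, Vec2 dir, Vec2 p) {
        float vx = p.x() - camPos.x(), vz = p.z() - camPos.z();
        // atan2 of the 2D cross and dot products gives the signed angle
        float cross = dir.x() * vz - dir.z() * vx;
        float dot = dir.x() * vx + dir.z() * vz;
        return (float) Math.atan2(cross, dot);
    }

    /** Midpoint of the two players at the extreme angles. */
    static Vec2 extremesMidpoint(Vec2 camPos, Vec2 dir, List<Vec2> players) {
        Vec2 minP = players.get(0), maxP = players.get(0);
        float min = Float.MAX_VALUE, max = -Float.MAX_VALUE;
        for (Vec2 p : players) {
            float a = signedAngle(camPos, dir, p);
            if (a < min) { min = a; minP = p; }
            if (a > max) { max = a; maxP = p; }
        }
        return new Vec2((minP.x() + maxP.x()) / 2, (minP.z() + maxP.z()) / 2);
    }
}
```

The same scan, run against card positions and head positions with the angle taken about camera.getLeft() instead, gives the two vertical extremes described in the steps above.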

Maybe there is no equation for this problem…

Imagine there are 3 players including yourself. The other two are sitting on either side of you, on the same side of the table - where do you look?

My equation would lead to the player looking straight forward, which isn’t ideal.

Oh well, hope I didn’t bore you with lots of text - I just get excited about mathematical problems like this.

To be clear, you don’t see a player’s avatar, just the part of the table they play their cards in. (It doesn’t really change the nature of the problem though…) Players will always be spread out around the table, as there are always at least 2 sides with at least 1 player per side. (Sides can have different numbers of players though.)

The table is laid out around 0,0,0 so that will always be at the center. Unfortunately you can’t just do camera.lookAt on that though, as you end up with wasted space at the top of the screen. You need to do a lookAt at a point on the y=0 plane somewhere between your x,z co-ordinate and the 0,0,0 midpoint.

This is the reason it’s an awkward problem: you have more degrees of freedom than you have fixed points, so in fact you get a range of valid positions. A suitable rule then needs to be applied at the end to find the “best” (or at least “not worst”) of those positions.

Essentially I want to be able to feed into the algorithm:

Bounding polygon for the playing table

Angle around the Y (i.e. what player position)

Minimum distance from table center

From that, the algorithm needs to give me an x,y,z position for the camera and an x,y,z position for the camera to look at. Those two positions combine to keep the player’s head as close to the table as possible and have the table fill the screen as much as possible.

So inputs: Polygon, Angle

Outputs: distance from center, height of camera, and distance from center of the point to look at.

In fact it’s even worse than that, as perspective means that the co-ordinate that is furthest left in pure co-ordinate terms may not be the one that’s furthest left once it’s transformed into the camera view. I think I’ve pretty much decided on using an iterative approach: I’ll fix the camera target’s x and z co-ordinates, start the look-at target at the middle of the screen, then while the camera is moving I’ll keep checking the 4 planes against the co-ordinates.

- If all are inside by less than X: exit.

- If all are inside by too much: move down.

- If some are inside by too much but not others: move the look-at to adjust.

- If all are outside: move up.

- If some are in and some out: move the look-at to adjust.

That should iterate pretty fast towards the correct point and if I do it right should even give a nice curving camera path.
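To sanity-check the height part of that loop (with the look-at held fixed at the center), here’s a standalone sketch. It simplifies to a straight-down camera with a hand-rolled projection so it runs without the engine - in the real thing the “is it on screen?” test would come from the frustum planes. It zooms out by doubling until everything fits, then refines by bisection:

```java
// Iterative camera-height fit against projected table corners.
// project() is a stand-in for the engine's view transform.
public class IterativeFit {
    // Normalized screen coordinate of a point at the given table offset,
    // seen from a camera straight above the origin at the given height.
    static float project(float coord, float height, float tanHalfFov) {
        return coord / (height * tanHalfFov); // in [-1, 1] when on screen
    }

    static boolean allInside(float[] coords, float height,
                             float tanHalfFov, float margin) {
        for (float c : coords) {
            if (Math.abs(project(c, height, tanHalfFov)) > 1f - margin) return false;
        }
        return true;
    }

    /** Zoom out until everything fits, then refine downwards by bisection. */
    static float fitHeight(float[] coords, float tanHalfFov, float margin) {
        float lo = 0.01f, hi = 0.02f;
        while (!allInside(coords, hi, tanHalfFov, margin)) hi *= 2f; // zoom out
        for (int i = 0; i < 30; i++) {                              // refine
            float mid = (lo + hi) / 2f;
            if (allInside(coords, mid, tanHalfFov, margin)) hi = mid; else lo = mid;
        }
        return hi;
    }
}
```

With a 1% margin and a corner 6 units out, this converges on the height where that corner sits just inside the tolerance band, which matches the exit condition in the list above.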

Yes… But the fun part is using an algorithm, right?

Could you give some test code that people could play around with?

Give code that generates this table and puts out X amount of people at various spots, and then let others (like me) fiddle around with the camera settings and maybe find some way?

It’s quite hard to imagine it all without any graphical support…

But otherwise it seems like you’ve solved it. I just thought, in case you really wanted an algorithm, that I have some time today to play around with it. I’m leaving for holiday tomorrow and then I won’t be able to help…

The fun part is what looks good for the players

Unfortunately the MatchState object has a lot of code in it and I’m not sure how easy it would be to break out the table creation stuff (which isn’t finished yet anyway). It should be possible but it might be fiddly.

I’ll see how it goes with the iterative approach and if that fails I might just put something together. It will still be there when you get home!

Thanks for the offer and enjoy your holiday.

@zarch said:

It's not as simple as taking the camera angle and doing basic trig to give the height - you need to solve for the height and angle that satisfy all 4 corner locations.

I haven't read the whole thread but actually the above statement isn't true. You just pick the corner that will have the greatest effect and do the trig on that.

Take each corner and make sure it's a camera position relative vector. (Vector pointing from eye to the point.)

Do a dot product with that and the left and up vectors for the camera. The maximum abs up dot and maximum abs left dot will give you the horizontal "edge" and vertical "edge".

And actually, those are the values that you'd need for the trig, also. Those two dot products are the eye-distance scaled sine of the vertical and horizontal half-fov.

Let me see if I can eye-ball the math for one point and one direction:

[java]
Vector3f dir = cam.getDirection();
Vector3f up = cam.getUp();
Vector3f loc = cam.getLocation();
Vector3f pos = // some point position
Vector3f relative = pos.subtract(loc);

float upDot = up.dot(relative);
float dist = dir.dot(relative);

// Figure out how much dist would have to scale to make upDot be the
// same as the camera height.
float scale = upDot / (cam.getHeight() * 0.5f);

// Calculate how far along the dir vector the camera would have to move
float move = (dist * scale) - dist;

cam.setLocation( loc.subtract( dir.mult(move) ) );
[/java]

...or something like that.

@pspeed said:

I haven't read the whole thread but actually the above statement isn't true. You just pick the corner that will have the greatest effect and do the trig on that.

Take each corner and make sure it's a camera position relative vector. (Vector pointing from eye to the point.)

Do a dot product with that and the left and up vectors for the camera. The maximum abs up dot and maximum abs left dot will give you the horizontal "edge" and vertical "edge".

And actually, those are the values that you'd need for the trig, also. Those two dot products are the eye-distance scaled sine of the vertical and horizontal half-fov.

I'm not sure how I could avoid checking all the points as I'm trying to ensure that every point is on screen - but as close to the edge of the screen as possible while keeping all the other points on screen.

Let's see if I followed this:

Let me see if I can eye-ball the math for one point and one direction:

[java]
Vector3f dir = cam.getDirection();
Vector3f up = cam.getUp();
Vector3f loc = cam.getLocation();
Vector3f pos = // some point position
[/java]

Seems to make sense. cam.getUp() would always be (0,1,0) but no point risking that assumption being wrong.

[java]
Vector3f relative = pos.subtract(loc);
float upDot = up.dot(relative);
float dist = dir.dot(relative);
[/java]

So, the relative becomes the difference between the camera and the point.

upDot then becomes |A| * |B| * cos(Θ). Up is a unit vector but relative isn't, so we have both the length of relative and the angle between up and relative.

dist then becomes |A| * |B| * cos(Θ). Is direction a unit vector? If so, we have both the length of relative (again) and the angle between the direction the camera is looking and relative.

[java]
// Figure out how much dist would have to scale to make upDot be the
// same as the camera height.
float scale = upDot / (cam.getHeight() * 0.5f);
[/java]

Camera height is the number of pixels on the screen here right? So scaling by 0.5f is just to get the difference between the angle and the center of the screen? :o

[java]
// Calculate how far along the dir vector camera would have to move
float move = (dist * scale) - dist;
[/java]

I get the idea of calculating a distance to then multiply into the direction, but I've no idea how this line achieves that.

[java]
cam.setLocation( loc.subtract( dir.mult(move) ) );
[/java]

...or something like that.

This bit I followed, it moves the camera to the new position. :D

@zarch said:

I'm not sure how I could avoid checking all the points as I'm trying to ensure that every point is on screen - but as close to the edge of the screen as possible while keeping all the other points on screen.

Let's see if I followed this:

Seems to make sense. cam.getUp() would always be (0,1,0) but no point risking that assumption being wrong.

No. If cam.getUp() is correct then it will be something different if you look up or down.

@zarch said:

So, the relative becomes the difference between the camera and the point.

upDot then becomes |A| * |B| * cos(Θ). Up is a unit vector but relative isn't, so we have both the length of relative and the angle between up and relative.

Not the angle. The cosine. The dot projects the location onto the up vector and gives you the length. If you were to imagine a triangle from the eye point that had dir as a base, then relative would be the scaled hypotenuse and you just calculated the scaled other side… which is why I calculate dist, to know the unscaled side…

@zarch said:

dist then becomes |A| * |B| * cos(Θ). Is direction a unit vector? If so, we have both the length of relative (again) and the angle between the direction the camera is looking and relative.

Not the angle... the cosine. See above. dist is then the length of the triangle side looking straight out from the eye.

@zarch said:

Camera height is the number of pixels on the screen here right? So scaling by 0.5f is just to get the difference between the angle and the center of the screen? :o

I think I might have flipped the divide by accident (I do this pretty often)...

Sort of. We know the height in pixels of pos because that's what upDot is. (See triangle above.) We want to figure out what we'd have to scale the above triangle by so that upDot is the height of the screen. Note: upDot is already positive or negative as needed.

I want to know how much I should multiply the sides of the triangle by to get upDot to be the same as half the screen height.

So if half screen height is 240 and upDot is 300... I need to shrink the triangle until upDot is 240... or I need to multiply by 240/300.

float scale = (cam.getHeight() * 0.5f) / upDot;

@zarch said:

I get the idea of calculating a distance to then multiply into the direction, but I've no idea how this line achieves that.

So now that we know what "dist" should be to make upDot the same as half the screen height... we figure out the delta of that distance to figure out how far to move the camera.

float newDist = dist * scale; …we've scaled the triangle's base now.

float move = newDist - dist; …the difference between the current distance from pos and where we'd like to be.

@zarch said:

This bit I followed, it moves the camera to the new position. :D

Then project backwards from current position the amount of the difference.

In your earlier post, you said you started with 4 points and wanted to know if they are on screen. You only need to find the max(abs) of the up vector and left vector dots to calculate a distance that makes the horizontal fit and a distance that makes the vertical fit, then just pick the best of those. Even if you have more than 4 points, you only need to accumulate the max(abs) of the up vector and left vector dot products first.
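Putting the pieces of this post together as standalone code - a tiny stand-in vector type replaces jME's Vector3f, and frustum half-angle tangents stand in for cam.getHeight(), so this is a sketch of the idea rather than the engine calls. The key fact it relies on is that up and left are perpendicular to dir, so moving the camera along dir changes each point's depth but not its up/left dot products:

```java
// Fit a set of points into the frustum by accumulating the worst-case
// up/left dot products and moving the camera along its view direction.
public class DotFit {
    record Vec3(float x, float y, float z) {
        Vec3 sub(Vec3 o) { return new Vec3(x - o.x, y - o.y, z - o.z); }
        Vec3 mult(float s) { return new Vec3(x * s, y * s, z * s); }
        float dot(Vec3 o) { return x * o.x + y * o.y + z * o.z; }
    }

    /** New camera location so every point fits both frustum half-angles. */
    static Vec3 fit(Vec3 loc, Vec3 dir, Vec3 up, Vec3 left,
                    Vec3[] points, float tanHalfV, float tanHalfH) {
        float needed = -Float.MAX_VALUE; // worst shortfall along dir
        for (Vec3 p : points) {
            Vec3 rel = p.sub(loc);
            float dist = dir.dot(rel); // depth of the point along dir
            // Depth at which this point would sit exactly on the
            // top/bottom and left/right frustum edges:
            float needV = Math.abs(up.dot(rel)) / tanHalfV;
            float needH = Math.abs(left.dot(rel)) / tanHalfH;
            // How much further back the camera must be for this point
            needed = Math.max(needed, Math.max(needV, needH) - dist);
        }
        // Move backwards (or forwards, if needed is negative) along dir
        return loc.sub(dir.mult(needed));
    }
}
```

Because the dots don't change as the camera slides along dir, one pass accumulating the maximum shortfall is enough - exactly the "accumulate the max(abs) first" point above.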

Thanks @pspeed, I’ll have a proper look through your post tomorrow and make sure I understand it.

I’ve just realised I may be massively over-complicating things though.

If I take my Vector3f[] tableCorners and run them through cam.getScreenCoordinates(), that will handle all of the transformations for me.

I can then just scan through the transformed vectors and pull out min x, max x, min y and max y. (I’d already identified that I just need those 4 values; I need to transform all the vectors to find the min/maxes though.)

With that, the algorithm becomes easy: I just compare min against 0 and max against the screen width/height, with say a 1% tolerance around the edge of the screen.
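A sketch of that screen-space check - screenCoords here stands in for the x,y results of cam.getScreenCoordinates() on each table corner (hypothetical values), and the verdict drives which way the camera should move:

```java
// Decide whether the projected table corners fit the screen, need the
// camera to zoom out, or leave enough wasted border to zoom in.
public class ScreenFit {
    enum Verdict { FITS, ZOOM_OUT, ZOOM_IN }

    static Verdict check(float[][] screenCoords, float width, float height) {
        float tolX = width * 0.01f, tolY = height * 0.01f; // 1% margin
        float minX = Float.MAX_VALUE, maxX = -Float.MAX_VALUE;
        float minY = Float.MAX_VALUE, maxY = -Float.MAX_VALUE;
        for (float[] c : screenCoords) {
            minX = Math.min(minX, c[0]); maxX = Math.max(maxX, c[0]);
            minY = Math.min(minY, c[1]); maxY = Math.max(maxY, c[1]);
        }
        if (minX < 0 || minY < 0 || maxX > width || maxY > height)
            return Verdict.ZOOM_OUT;          // something is off screen
        if (minX > tolX && minY > tolY && maxX < width - tolX && maxY < height - tolY)
            return Verdict.ZOOM_IN;           // wasted border on every side
        return Verdict.FITS;                  // touching the tolerance band
    }
}
```

A skew between which edges are tight (e.g. top inside the band but bottom wasteful) would be the signal to move the look-at point rather than the height.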

A dirty approach you could use (this is my way of doing things and it’s dirty alright)…

If you have the proper dimensions, like a model, put it in a scene, put a camera and move around until everything is exactly the way you want. Put some key bindings to dump the values you need and use that.

Unless your model changes, those values should be right, or close. From there you can adjust things. The models don’t have to be detailed or anything, just approximately the right size.

I’m a visual person so that helps me a lot. I need to visualize things. Badly.

The problem is that the table is a variable size. The engine supports matches which can have 2 → unlimited teams and 1 → unlimited members per team. In practice the match rules limit things, so I wouldn’t expect to see ridiculous numbers, but this code does need to cope with all the possible variations as I don’t want it to break when I fiddle with the match rules sometime (and the match rules already support considerable flexibility in team sizes and numbers).

Thanks to the discussion here though I’ve got a plan I’m reasonably happy with. I’m going to have a proper look through pspeed’s stuff when I have time tomorrow and then see what I can cook up.

Yeah, my way won’t work that nicely with variable stuff though.

Wish I could help more but I’ll leave that to the experts.

Your way also falls down with variable screen aspect ratios. You might see the top or sides of the table disappearing off the edge of the screen.