The Insta360Pro 360 camera can record in 4k, 6k and 8k video formats for viewing in 360 environments.
I got curious if there’s a difference in output quality depending on which resolution I record at.
As it turns out, there is a difference.
I was trying to figure out an ideal video resolution to record VR video in, for my HTC Vive VR headset. By ideal, I mean what video file resolution would take full advantage of the display’s resolution.
I am asking myself this question because I don’t think the current state of technology has a good answer, and I want to know what recording resolution I need to max out the amount of recorded detail.
Its display resolution is 1080×1200 per eye, with a field of view of 110 deg side to side and 100 deg up and down.
That means that side to side, to cover 360 deg, the ideal resolution is 3534 px (360/110 × 1080).
Up and down, to cover 180 deg, the per-eye resolution is 2160 px (180/100 × 1200).
So the ideal 360 deg file resolution is 3534×2160 px per eye. For stereo video, the vertical resolution needs to double, so it becomes 3534×4320. That’s very close to 6k resolution (4992×3744, which is about 18 megapixels).
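The arithmetic above can be sketched in a few lines of Python, using the Vive display figures quoted earlier:

```python
# Ideal equirectangular resolution for a display of 1080x1200 px per eye
# with a ~110 deg horizontal and ~100 deg vertical field of view.
h_px, v_px = 1080, 1200      # display pixels per eye
h_fov, v_fov = 110, 100      # field of view in degrees

width = 360 / h_fov * h_px            # pixels needed to cover 360 deg horizontally
height_per_eye = 180 / v_fov * v_px   # pixels needed to cover 180 deg vertically
stereo_height = 2 * height_per_eye    # top/bottom stereo doubles the height

print(int(width), int(height_per_eye), int(stereo_height))  # 3534 2160 4320
```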
Everyone that’s tried a VR headset in 2017 will attest to the fact that the display resolution is rather poor. But even with this ‘poor’ display resolution, we need 6k (18 megapixel) video to take full advantage of it. 6k video files are massive, and working with them is not a simple task from a computational perspective. Only the very best and most powerful machines can handle smooth editing at these resolutions.
If we want to increase that display resolution to get a less pixelated image in the headset, we are also going to have to increase the recorded video resolution, and the required file resolution scales linearly with the display. Even a modest 1.4x increase in headset display resolution on both axes (which results in a 2x increase in total pixels) pushes the required file to roughly 4950×6050 px, beyond the 6k figure above. Going to a 2x increase on both axes (a 4x increase in total pixels) calls for roughly 7070×8640 px, a frame with nearly twice as many pixels as 8k UHD (7680×4320).
I could not even speculate on what kind of internet connection and bandwidth you would need to stream that kind of video file from YouTube.
All this being said, while a 4x increase in display resolution is significant, it does not come close to the full potential of what the eye can see and distinguish.
The current 4k video streams available on YouTube do not contain enough information to take full advantage of the display resolution in today’s generation of headsets.
I haven’t received my 10Gb NICs yet, but when I do I’ll need to make sure I can get maximum throughput to the array.
Just installed two 1TB drives in the UnRAID box, and wanted to set them up in a RAID0 configuration.
The first step is to move the data off the current cache drive (a 250GB SSD). This involves moving the data to the HD array by setting the share’s “Use Cache” setting from “Only” to “Yes”, then clicking the “Move Now” button on the Main page. I had to do this for both the “appdata” and “domain” shares.
All that’s required is to select the two devices as cache devices and start the array. If multiple cache drives are selected, the system automatically sets them up as RAID1. This can be confirmed by clicking on the cache drive and looking at the “Balance Status” section: Data, System and Metadata will all show RAID1.
Because btrfs is very clever, the cache drive RAID array can now be converted to something else, RAID0 in my case.
In the “Balance” field enter “-dconvert=raid0 -mconvert=raid1” and click the “Balance” button.
This will convert the array.
The Balance Status will reflect this change, with Data now showing RAID0 and the available storage increased.
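For reference, the Balance field maps onto plain btrfs commands. Outside the UnRAID GUI, the same conversion could be sketched like this (assuming the cache pool is mounted at /mnt/cache, as on a stock UnRAID install):

```shell
# Convert data chunks to RAID0 while keeping metadata mirrored as RAID1
btrfs balance start -dconvert=raid0 -mconvert=raid1 /mnt/cache

# Watch progress and confirm the new profiles afterwards
btrfs balance status /mnt/cache
btrfs filesystem df /mnt/cache   # Data should now show RAID0
```

Note that RAID0 for data means a single drive failure loses the cache contents, which is why metadata is left as RAID1 here.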
To move the data back to the SSD cache, set the Share settings for the user shares for “Use Cache” to “Prefer” and then click the “Move Now” button. This will move the data back to the SSD cache and off the HD array.
This took a while to figure out as I had to try several combinations of settings to get it running.
When setting up the VM for FreeNAS, use “FreeBSD” as the template.
For BIOS use “OVMF” and Machine should be “i440fx-2.7”
When installing FreeNAS make sure it is set to UEFI boot.
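Those GUI choices correspond roughly to this fragment of the VM’s XML (visible in the UnRAID VM editor’s XML view). This is a sketch only: the OVMF loader path is an assumption and varies by UnRAID version.

```xml
<os>
  <!-- Machine type from the "Machine" dropdown -->
  <type arch='x86_64' machine='pc-i440fx-2.7'>hvm</type>
  <!-- OVMF firmware selected by the "BIOS" dropdown; this path is an assumption -->
  <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
</os>
```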
In order to pass a generic PCIe device (a SATA controller in my case) through to a VM in UnRAID, use this guide.
When the camera is on and recording while attached to an external battery bank, the camera’s battery does not charge. In fact, the battery level decreases. It decreases very slowly, but it still decreases. The power consumption of the camera is higher than what the external bank can dump into the camera.
The documentation mentions the internal battery life is about 2 hours, and the charging time to full capacity is about 3 hours. So it makes sense that the external charger (even though it’s a 2.4A charger) can’t deliver enough power to run the camera, let alone run the camera and charge the battery.
This means that the camera can’t be used for an unlimited amount of time, even if plugged into the wall. I don’t know exactly how long the camera can stay on in this mode, but it’s substantially longer than on the onboard battery alone.
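Using the two figures from the documentation, a back-of-the-envelope estimate is possible. This is a sketch that assumes constant charge and discharge rates, which won’t hold exactly in practice:

```python
from fractions import Fraction

# Normalize battery capacity to 1, per the documented figures:
# ~2 h runtime on battery alone, ~3 h to charge from empty.
drain_rate = Fraction(1, 2)    # fraction of battery consumed per hour while recording
charge_rate = Fraction(1, 3)   # fraction of battery restored per hour by the charger
net_drain = drain_rate - charge_rate   # net loss of 1/6 of the battery per hour

runtime_hours = Fraction(1) / net_drain
print(runtime_hours)  # 6 -> roughly six hours, starting from a full battery
```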
Also, it should be noted that if you want maximum battery life out of the camera, it’s best to plug it in right at the start of operation, when the internal battery is fully charged. If you wait until the internal battery is depleted by any amount, an external battery bank or wall plug will be of less use than if the internal battery were full.
Contemplating what strategy I should use for the funds I am earning from mining. So far I’ve been converting the BTC earned into $$ to pay off the debt incurred by buying GPUs for mining. But is that the best strategy?
One of two things will happen: mining profitability will gradually fall to the point where electricity costs overtake earnings, or mining will remain profitable indefinitely (or at least for the long term).
If all BTC earnings are kept as BTC, and the price tanks, then all earnings are worthless.
If all BTC earnings are converted to $$ right away, then you don’t capture any increase in BTC’s value. The price is already quite high; in the short term I don’t think BTC will sky-rocket to double what it is today. That is very unlikely. Perhaps in the long run, but likely not in months.
As with any business, priority one is to pay off the initial investment. Since the GPUs still hold value, I’ll count them at 40-50% of their price on the used market, because if it ever comes to selling them off, the market will be flooded with GPUs.
In the short term, converting the BTC to $$ protects against large losses. It’s much easier to imagine the BTC price falling dramatically (by 5-10 times) than doubling again in the next few months. A doubling would only bring a 2x gain, while a 5-10x drop would be a much harder hit to my $$ wallet.
My strategy for the short term is to convert all mining earnings to $$ at the best exchange rate possible, to protect myself against that potential loss.
I just read a very enlightening thing today. Actually, this is the second time I’m going through Alan Watts’s book “The Book: On the Taboo Against Knowing Who You Are”.
The concepts of ‘being present’ and ‘living in the moment’ have been preached by many schools of thought, and I’ve been exposed to them for many years now. Another way of looking at the living-in-the-moment idea is that one should limit expectations of future events.
However, no explanation has been satisfying enough to convince me through and through to limit how much I over-think the future. Nothing I’ve come across has had a lasting impact.
Alan makes an interesting point in his book: in order to understand any one thing (be it an organism or inanimate matter), one needs to understand not only the item itself, but also the environment in which it exists. To understand only the item is to see only half the picture.
I spend a lot of time in my head. I know that, which is why I’ve been working for so many years to be more present and manage expectations. But as you know, this is easier said than done.
The connection I made this morning is that it’s pointless to dwell on how certain future events may unfold, or to hold expectations of how things will turn out. Even though we may understand (or think we understand, based on past events) how a certain item functions or how a person behaves, the context of the moment (the environment and everything else taking place at that time) will shape the way the item or person behaves, in unimaginable ways.
This realization is quite comforting to me, because it gives me a reason not to over-think the future.
I had some trouble connecting my Android phone to the new Vuze Camera from HumanEyes. I contacted their support department, and they offered some help, but following their connection instructions meant resetting the WiFi settings every time.
The problem I was having was that even though I would connect my device to the WiFi network the Vuze Camera created, the software could not connect to display the live feed or access the settings.
After playing around some more with the camera, I figured out what’s going on.
The software on your device connects to the camera through WiFi, and it likely tries to reach the camera at some preset IP address. If the device doesn’t have a cellular connection, this method works every time with no hassles.
However if you’re connecting from a phone that already has a cellular network connection, things change. When my phone connects to the Vuze network, it informs me that the Vuze network has no internet access, and proceeds to use the cellular network for continued internet access. Because of this, one of two things is likely happening:
1. The IP address the Vuze software on the phone is trying to reach gets routed out to the internet, making the Vuze inaccessible to the phone.
2. The subnet that the cellular network’s DHCP service supplies to the phone makes the Vuze camera’s IP address unreachable from the phone’s IP address.
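The second possibility can be illustrated with Python’s standard ipaddress module. All addresses here are made up; the camera’s real address and the carrier’s subnet will differ:

```python
import ipaddress

# Hypothetical addresses to illustrate the subnet problem.
camera_ip = ipaddress.ip_address("192.168.1.101")    # assumed Vuze camera address
vuze_wifi = ipaddress.ip_network("192.168.1.0/24")   # the camera's Wi-Fi network
cellular = ipaddress.ip_network("10.123.0.0/16")     # assumed carrier-assigned subnet

# The camera is only addressable from the interface that shares its subnet.
print(camera_ip in vuze_wifi)  # True
print(camera_ip in cellular)   # False
```

If the phone routes traffic through the cellular interface, packets addressed to the camera never reach the Wi-Fi network it lives on.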
So in order to always be able to connect to the Vuze camera from my phone, I have to:
1. Put the phone in Airplane Mode
2. Enable ONLY Wi-Fi on the phone, and connect to the Vuze network.
The Vuze software now reliably connects to the camera every time. I would still like to have a cell network connection while connected to the Vuze camera, but I don’t know if this is something that can be fixed in software.
Spherical video is tricky. The process of stitching together multiple images to create an equirectangular projection doesn’t give you an exact image resolution. So what resolution should I be outputting my video at? Too low a resolution and you’re losing detail. Too high a rendered resolution and you’re wasting bandwidth, since no additional detail is created when an image is scaled up.
In using the new Vuze Camera from HumanEyes, I wanted to figure out what the ideal rendered output should be to maximize image quality and minimize wasted file size. It’s easy to output at 4096×4096 px and be done with it, but that resolution is not easily playable by most devices today, and it may be a waste of bandwidth.
So the goal is to get a ballpark idea of what the final rendered resolution should be, based on the data recorded by each sensor, in order to carry as much detail as possible from the raw footage through to the YouTube file, so that the viewing experience is as sharp as possible when viewed in 3D 360º, without wasting extra file size.
The resolution I came up with is 3200 x 2880 pixels. Read on to find out how I came to that conclusion.