Turning the big knob on the front rotates the globe so you can set the position. I would assume they set the position once they're in orbit and periodically make sure it's still accurate.
> The cosmonauts configured the Globus by turning knobs to set the spacecraft's initial position and orbital period. From there, the Globus electromechanically tracked the orbit. Unlike the Apollo Guidance Computer, the Globus did not receive navigational information from an inertial measurement unit (IMU) or other sources, so it did not know the spacecraft's real position. It was purely a display of the predicted position.
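Since the Globus only dead-reckons from the initial settings, its behavior can be sketched in a few lines: given a start position and an orbital period, predict the sub-satellite point at any later time. This is a minimal sketch assuming a circular orbit; the function names and the simplifications are mine, not from the article (the real instrument did this with gears and cams, not floating point).

```python
import math

OMEGA_EARTH = 2 * math.pi / 86164.0  # Earth's sidereal rotation rate, rad/s

def ground_track(t, period, inclination, lon0=0.0):
    """Predicted sub-satellite point by pure dead reckoning.

    t: seconds since the crew dialed in the initial position
    period: orbital period in seconds (set by the crew on the Globus)
    inclination: orbital inclination in radians
    lon0: longitude of the ascending node at t = 0, in radians
    Returns (latitude, longitude) in radians.
    """
    u = 2 * math.pi * t / period  # angle traveled around the orbit
    # Latitude oscillates between +/- inclination once per orbit.
    lat = math.asin(math.sin(inclination) * math.sin(u))
    # Longitude in the orbital plane, minus the Earth turning underneath.
    lon = (lon0
           + math.atan2(math.cos(inclination) * math.sin(u), math.cos(u))
           - OMEGA_EARTH * t)
    return lat, lon
```

Like the real device, this has no feedback: any error in the dialed-in period or position grows with every orbit, which is why the crew would have to re-check and reset it.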
I’d imagine they could error-correct periodically via ground triangulation of their signals. Or by looking out the window for a large, distinctive landmark.
No questions, but thanks to all of you. That's some fascinating stuff you're working on, and big kudos for researching and documenting this piece of tech history!
I remember seeing this Globus device in all the Russian spaceflight photos since I was a kid, when we learned about Gagarin and the others (I am from an ex-communist country), and nobody had ever published much information about how this thing works, until now. Yet it was a staple of Soviet spaceflight until the early 2000s, and similar devices also flew on all the Soviet space stations (the Salyuts and Mir).
This is a naive question, but how does one even begin to conceive of a device like this? What does a "mechanical computer" development life cycle look like?
Just like with electronics, it all started with individual components. It only looks so incredibly complex when all those individual units are composed into a big whole.
Underneath, though, there is an order similar to the layering of software.
Here are a few mechanical computing components explained. The context was real-time input and output for controlling ship guns against ground, sea, or air targets, taking into account the ship's own speed, angles, angular rates, distances and their rates of change, and other factors, like wind, when they could be measured.
Exactly, we think in terms of interfaces. The technical history of the Apollo moon missions is a good example. At a high level there is a big diagram with things like transmissions and modules drawn as single blocks, and we define how those blocks need to interact with each other. Then teams of engineers work on each block, designing and building it to do what it was defined to do on that big flow chart. Each part is tested against its interfaces (which can be mechanical, electrical, hydraulic, control, etc.), and if each part does what it's supposed to, the whole thing should work.
I think it's basically a process of figuring out the equations you want, then how to "mechanize" them with gears, cams, and differentials. Then it is mechanical engineering to fit all the gears in a box. (The equations are generally much simpler than you'd implement in software, and you're not dealing with algorithms.)
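That mapping from equations to hardware is quite direct: a gear pair multiplies by a fixed ratio, a differential sums two shaft rotations, and a cam follower traces an arbitrary one-variable function. A toy sketch of that idea (the element names and the composed equation are my own illustration, not taken from the Globus):

```python
import math

# Each mechanical element maps input shaft rotation(s) to an output rotation.

def gear(ratio):
    """A gear pair: multiplies a rotation by a fixed ratio."""
    return lambda x: ratio * x

def differential(a, b):
    """A differential: sums two input shaft rotations."""
    return a + b

def cam(profile):
    """A cam and follower: evaluates an arbitrary function of one shaft angle."""
    return lambda x: profile(x)

# Mechanize y = sin(a + 2*b): a 2:1 gear, a differential, and a sine cam.
double = gear(2.0)
sine_cam = cam(math.sin)

def compute(a, b):
    return sine_cam(differential(a, double(b)))
```

The "program" is fixed at design time by which shafts are meshed to which, which is why these machines solved one family of equations extremely well and nothing else.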
Another thing is that the Globus went through multiple revisions so the original didn't have as much functionality. So it's kind of like any project where you make the minimum viable product and then incrementally add features.
Not so different from software, I would hazard a guess. Document requirements (accuracy over the intended usage period, human interface, weight, size) and identify potential subsystems and interfaces. Obtain an industrial base with both practical and theoretical expertise in mechanical engineering. Brainstorm possible solutions and compare them to the requirements. Prototype and iterate. Test. Train.