Blue Origin is now the first company to launch a rocket above the Kármán line, land it safely, and then relaunch the same booster. According to a blog post on Blue Origin's website, the New Shepard booster used in the November 2015 launch was reused for a new launch on January 22, 2016. The post includes some interesting details about the reuse and about adjustments made to the return and landing algorithms.
The very same New Shepard booster that flew above the Karman line and then landed vertically at its launch site last November has now flown and landed again, demonstrating reuse. This time, New Shepard reached an apogee of 333,582 feet (101.7 kilometers) before both capsule and booster gently returned to Earth for recovery and reuse.
Data from the November mission matched our preflight predictions closely, which made preparations for today's re-flight relatively straightforward. The team replaced the crew capsule parachutes, replaced the pyro igniters, conducted functional and avionics checkouts, and made several software improvements, including a noteworthy one. Rather than the vehicle translating to land at the exact center of the pad, it now initially targets the center, but then sets down at a position of convenience on the pad, prioritizing vehicle attitude ahead of precise lateral positioning. It's like a pilot lining up a plane with the centerline of the runway. If the plane is a few feet off center as you get close, you don't swerve at the last minute to ensure hitting the exact mid-point. You just land a few feet left or right of the centerline. Our Monte Carlo sims of New Shepard landings show this new strategy increases margins, improving the vehicle's ability to reject disturbances created by low-altitude winds.
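The "position of convenience" strategy described above can be sketched as a simple clamp: instead of forcing the vehicle to translate all the way to the pad center, any offset already inside an acceptable zone is simply accepted. This is a hypothetical illustration, not Blue Origin's flight code; the function name and the `pad_radius` parameter are invented for the example.

```python
def choose_touchdown_target(lateral_offset, pad_radius=3.0):
    """Initially target the pad center, but if the vehicle is already
    over the pad, accept the current lateral offset rather than
    commanding an aggressive last-minute translation."""
    if abs(lateral_offset) <= pad_radius:
        # Like the pilot analogy: land a few feet off the centerline
        return lateral_offset
    # Outside the acceptable zone: correct only as far as the pad edge
    return pad_radius if lateral_offset > 0 else -pad_radius

print(choose_touchdown_target(1.2))   # small offset is accepted as-is
print(choose_touchdown_target(5.0))   # large offset is clamped to the edge
```

The design intuition is the one in the quote: prioritizing attitude over exact lateral position avoids last-moment "swerves" that eat into control margin in low-altitude winds.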
(Score: 0) by Anonymous Coward on Monday January 25 2016, @01:34PM
Can someone please explain to me what exactly Monte Carlo simulations are? I read the Wikipedia article, but I still don't understand what they are or how I could run one in software.
I'm a computer engineer but never stumbled on this topic at university. I probably need to look at it from a different perspective (maybe a couple of examples in code) to understand it.
Thank you in advance.
(Score: 1, Informative) by Anonymous Coward on Monday January 25 2016, @03:01PM
It's just a fancy term for random sampling.
The idea of a simulation like this is that the error tends to converge to 0 as you increase the number of samples.
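A classic minimal example of this convergence is estimating π by random sampling: throw random points into the unit square and count the fraction that land inside the quarter circle. This is a textbook illustration, not anything specific to New Shepard.

```python
import random

def estimate_pi(n_samples):
    """Estimate pi by sampling random points in the unit square and
    counting how many fall inside the quarter circle of radius 1."""
    inside = 0
    for _ in range(n_samples):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:
            inside += 1
    # (area of quarter circle) / (area of square) = pi / 4
    return 4.0 * inside / n_samples

# The estimate tends to get closer to pi as n_samples grows.
for n in (100, 10_000, 1_000_000):
    print(n, estimate_pi(n))
```

Run it a few times: the spread of the estimates shrinks roughly like 1/sqrt(n), which is the usual Monte Carlo convergence rate.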
(Score: 3, Informative) by ThePhilips on Monday January 25 2016, @03:34PM
Can please someone explain me what exactly are Monte Carlo simulations?
If my memory serves me right: in a simulation, instead of going through all the possible combinations of input values (e.g. with some fixed step), you go through combinations of random input values. The more you run the simulation, the more precise the results. Also, a fixed step could completely miss a fluctuation if the fluctuation coincides with the step size. With a random distribution of inputs you have a better chance of catching such irregularities.
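The grid-versus-random point can be shown in a few lines. Here a mostly flat function has one narrow spike; a fixed-step scan strides right over it, while random samples eventually land inside it. The function and the spike location are made up for the illustration.

```python
import random

def f(x):
    # Mostly flat, with one narrow spike (hypothetical example)
    return 10.0 if 0.53 < x < 0.55 else 1.0

# Fixed-step grid scan with step 0.1: samples 0.0, 0.1, ..., 1.0,
# none of which fall inside the spike.
grid_max = max(f(i * 0.1) for i in range(11))

# Monte Carlo scan: with enough random samples, some land in the spike.
random.seed(1)
mc_max = max(f(random.uniform(0.0, 1.0)) for _ in range(10_000))

print("grid:", grid_max)   # the grid never sees the spike
print("monte carlo:", mc_max)
```

With 10,000 uniform samples and a spike of width 0.02, the chance of missing it is about 0.98^10000, i.e. essentially zero.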
P.S. Check out simulated annealing [wikipedia.org] too. I haven't used plain MC, but I did implement annealing a couple of times in the past. It is based on MC and is less generic: it is used for finding a minimum/maximum with high probability (a low probability of getting stuck in a local minimum/maximum), yet with acceptable computational time.
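A bare-bones sketch of simulated annealing, under the usual textbook scheme (random neighbor moves, Metropolis acceptance, geometric cooling); the test function, step size, and cooling schedule are arbitrary choices for the example.

```python
import math
import random

def anneal(f, x0, steps=20_000, t0=2.0, cooling=0.999):
    """Minimize f by simulated annealing: always accept improving moves,
    accept worsening moves with probability exp(-delta / T), and shrink
    the temperature T over time so uphill moves become rarer."""
    x, fx = x0, f(x0)
    best_x, best_fx = x, fx
    t = t0
    for _ in range(steps):
        x_new = x + random.gauss(0.0, 0.5)   # random neighbor
        fx_new = f(x_new)
        delta = fx_new - fx
        if delta < 0 or random.random() < math.exp(-delta / t):
            x, fx = x_new, fx_new
            if fx < best_fx:
                best_x, best_fx = x, fx
        t *= cooling  # cool down
    return best_x, best_fx

# A bumpy function: a shallow quadratic plus a sine, so it has several
# local minima; a pure downhill search from x0 = -1.5 would get stuck.
def bumpy(x):
    return 0.1 * (x - 2.0) ** 2 + math.sin(3.0 * x)

random.seed(0)
x_min, f_min = anneal(bumpy, x0=-1.5)
print(x_min, f_min)
```

The occasional acceptance of uphill moves at high temperature is exactly the low-probability-of-local-minimum property mentioned above: the walk can climb out of the basin it started in before the temperature drops and it settles.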