Watch the video.
https://www.youtube.com/watch?v=ekgiScr364Y
Then read the text, alright?
"Prenatal care. These are the kinds of services folks depend on Planned Parenthood for.”
- Planned Parenthood CEO Cecile Richards
Planned Parenthood is spending millions of dollars in advertising right now, saying they support “choices” for pregnant women, but nothing could be further from the truth…
Despite Planned Parenthood’s claims, Live Action’s investigative team found that prenatal care is virtually non-existent for mothers who actually want to keep their babies. We documented it in our NEW investigative video, which you can see HERE:
https://www.youtube.com/watch?v=ekgiScr364Y
Our investigators contacted all 41 Planned Parenthood affiliates in the United States, reaching out to 97 facilities, and discovered only FIVE offered any sort of prenatal care at all.
By turning away pregnant women seeking prenatal care, Planned Parenthood makes it obvious that it has one priority - and supports only one option - for most women: abortion.
But that doesn’t stop Planned Parenthood from lying to the public about its prenatal services. In fact, this is all part of Planned Parenthood’s strategy to protect its $550 million in taxpayer funding -- by downplaying the 887 preborn children they dismember, poison, and starve to death every day.
With the fight to defund Planned Parenthood in full force in Congress, we need to act quickly and share this information with more Americans so that they, too, know the truth: Planned Parenthood is not a “health care provider”; it is an abortion corporation.
Please share this video with your friends on Facebook: https://www.facebook.com/liveaction/videos/10154911641473728/.
And for your friends who aren’t on Facebook, email them this link.
In the following weeks, Live Action will be releasing more videos exposing Planned Parenthood's relentless focus on abortion and the lack of authentic health care. Live Action’s groundbreaking investigative report will shatter the narrative and the myths Planned Parenthood so desperately wants the American people to believe.
Now is the time to deal a crippling blow to the abortion giant - the lives of preborn children are depending on us. Together, we can put an end to Planned Parenthood’s lies and the state-sponsored killing of children.
http://www.foxnews.com/politics/2017/01/20/text-president-trumps-obamacare-executive-order.html
Text of President Trump's ObamaCare executive order
MINIMIZING THE ECONOMIC BURDEN OF THE PATIENT PROTECTION AND AFFORDABLE CARE ACT PENDING REPEAL
By the authority vested in me as President by the Constitution and the laws of the United States of America, it is hereby ordered as follows:
Section 1. It is the policy of my Administration to seek the prompt repeal of the Patient Protection and Affordable Care Act (Public Law 111-148), as amended (the "Act"). In the meantime, pending such repeal, it is imperative for the executive branch to ensure that the law is being efficiently implemented, take all actions consistent with law to minimize the unwarranted economic and regulatory burdens of the Act, and prepare to afford the States more flexibility and control to create a more free and open healthcare market.
Sec. 2. To the maximum extent permitted by law, the Secretary of Health and Human Services (Secretary) and the heads of all other executive departments and agencies (agencies) with authorities and responsibilities under the Act shall exercise all authority and discretion available to them to waive, defer, grant exemptions from, or delay the implementation of any provision or requirement of the Act that would impose a fiscal burden on any State or a cost, fee, tax, penalty, or regulatory burden on individuals, families, healthcare providers, health insurers, patients, recipients of healthcare services, purchasers of health insurance, or makers of medical devices, products, or medications.
Sec. 3. To the maximum extent permitted by law, the Secretary and the heads of all other executive departments and agencies with authorities and responsibilities under the Act, shall exercise all authority and discretion available to them to provide greater flexibility to States and cooperate with them in implementing healthcare programs.
Sec. 4. To the maximum extent permitted by law, the head of each department or agency with responsibilities relating to healthcare or health insurance shall encourage the development of a free and open market in interstate commerce for the offering of healthcare services and health insurance, with the goal of achieving and preserving maximum options for patients and consumers.
Sec. 5. To the extent that carrying out the directives in this order would require revision of regulations issued through notice-and-comment rulemaking, the heads of agencies shall comply with the Administrative Procedure Act and other applicable statutes in considering or promulgating such regulatory revisions.
Sec. 6. (a) Nothing in this order shall be construed to impair or otherwise affect:
(i) the authority granted by law to an executive department or agency, or the head thereof; or
(ii) the functions of the Director of the Office of Management and Budget relating to budgetary, administrative, or legislative proposals.
(b) This order shall be implemented consistent with applicable law and subject to the availability of appropriations.
(c) This order is not intended to, and does not, create any right or benefit, substantive or procedural, enforceable at law or in equity by any party against the United States, its departments, agencies, or entities, its officers, employees, or agents, or any other person.
DONALD J. TRUMP
THE WHITE HOUSE,
January 20, 2017.
(This was written last year but never posted)
I spent a hundred bucks on my next book last week.
Each story had an illustration at the beginning, except one: “Watch Your Language, Young Man!” I could find no suitable old women on Google Images, so I figured I’d have to either find an old woman at a bar who would want to be the illustration of a shrewish old lady, or just get out my pencil and make one.
Rust never sleeps! And boy, but my fingers seemed to be solid rust. Of course, when I was young I drew every day, or at least almost every day. I was damned good.
Not any more. I haven’t drawn a single thing since my kids were born three decades ago. So of course when I sat down with pencil and paper, nothing was produced but offal.
Damn. It was late and I’d had a few beers, so maybe I was drunk? I set it aside for the next morning.
Several days and a couple sheets of paper later, I finally had a cartoon drawing of an angry old crone. I figured I’d digitize her the same way I digitized my slides—I’d use my phone’s camera. With an eight by ten image to photograph, it should work fine. After all, the cover of The Paxil Diaries is a photo of one of the paintings I painted when I still had talent, and it turned out all right.
Not Mrs. Ferguson. The white paper was a neutral gray in the digital image. “GIMP’ll fix it,” I thought.
Nope. Adjusting the brightness and contrast removed some of the details. Actually, a lot of them.
Several tries later I gave up, and decided to just scan it. I went down to the basement, where the scanner’s been since I moved in here, and realized that first, it probably wouldn’t work any more, and even if it did it used a parallel port to get the image in a computer, and when was the last time you saw a parallel port? So I drove to Staples, where all the scanners were attached to printers!
I finally found a sales guy, who found a couple without printers that cost more than the ones with printers attached. He said they always put printers on cheap scanners, so I bought one of the expensive ones, an Epson Perfection V39.
I took it home, scanned Mrs. Ferguson, put her at the top of the story, and printed her out; shrunk down like that, a lot of the details were gone again. So I thickened some lines and rescanned. It’s fine now.
I wasn’t going to mention it because when I bought the scanner I had the idea of scanning all the photo albums for Patty, but that’s taking a long time, they won’t be done by Christmas, and Leila says she can’t come this year, anyway.
I have one scanned, and half its photos straightened out and separated from each other, but I’ll be at it for a while. I’m also going to scan the book my uncle co-wrote, and if I get permission from my aunt to publish it I’ll do so. Of course, it would only be of interest to family since it’s about family history, some of it ancient, fifteenth century ancient.
I really like that scanner! It’s a lot smaller than the old one in the basement; that one’s four or five inches thick and a foot and a half by two feet, and has a power cord with a big box in the middle and a parallel port. The new one is smaller than my big laptop and needs no power cable, as it gets its power from the USB port. It uses the same kind of USB cable as your phone (unless you have an Apple, which is compatible with nothing).
At any rate, I haven’t written much lately...
If you listen to the pundits who support Planned Parenthood, the sky will surely fall if the abortion chain is defunded.
If even one Planned Parenthood affiliate or center has to close as a result of defunding, they say, the patients Planned Parenthood serves will have no access to health services elsewhere. This is utter nonsense, of course.
The claim that removing federal dollars from Planned Parenthood will shutter their doors is ludicrous. As Live Action News has previously reported, the organization’s own annual reports reveal that Planned Parenthood has been netting a profit for many years. Almost every year since 2000, Planned Parenthood’s revenue has exceeded their expenses — not just by a few dollars, but by tens of millions of dollars (yearly surpluses ranging from $18.5 million to a high of $127 million). In addition, with the threat of defunding now more real under the newly elected Congress and president, Planned Parenthood has repeatedly claimed that private donations are suddenly flooding into their coffers.
For the sake of argument, let’s imagine what would happen if we applied this same logic — that a profitable organization should be taxpayer funded, merely because closing would disenfranchise its customers — to any other business. Let’s suppose it was thought that department store chains should receive taxpayer funding because online sales are hurting chain stores’ business. The argument could be made that these department stores have served many people, that they are located in many disadvantaged communities, and that poor people who do not have internet access will be disenfranchised if these stores close. Should we then give these stores half a billion taxpayer dollars every year (the amount Planned Parenthood receives) to keep them open?
The truth is that there are Federally Qualified Health Centers (FQHCs) already in place which could serve the patients Planned Parenthood serves — outnumbering Planned Parenthood centers 20 to 1 — so why do Planned Parenthood spokespersons (many of whom earn six-figure salaries) want you to believe that American women could not survive without them?
Planned Parenthood president Cecile Richards has even made totally unsupported claims that millions would be without healthcare if Congress votes to defund Planned Parenthood. Richards recently told Rolling Stone:
This is literally whether a young man in Texas can come to us for an STI testing, or whether a woman who has a lump in her breast can come to us in Ohio to have a breast exam or be referred for screenings, or whether a college student or a young person anywhere in the country can come to us for family planning. We’re talking about more than a million-and-a-half people who rely on Planned Parenthood, and for most of them we’re their only medical provider. As all of the medical institutions have said: There’s no one to take our place providing low- and moderate-income people with preventive health care. There isn’t any other entity that is doing that work.
Interesting that she mentions the breast screenings, because Planned Parenthood, as Live Action has documented, does not do mammograms — but FQHCs do.
I am curious, however, as to how the defunding of Planned Parenthood would cause the apocalypse, but closures of other non-profits — specifically hospitals, which one could argue offer far more needed “services” — would not.
Let me explain.
According to a 2015 report published in the journal Health Affairs, patient health was not significantly compromised when hospitals closed. Non-Profit Quarterly pointed out, with regard to the study, that “vulnerable hospitals that have not been financially sustainable, with operating margins of ‑20% on average, have been the first to close, causing public concern that displaced patients will experience declining health and even death when access to care goes away.”
Despite this concern, the 2015 study found “no significant difference between the change in annual mortality rates for patients living in hospital service areas (HSAs) that experienced one or more closures and the change in rates in matched HSAs without a closure…. Nor was there a significant difference in the change in all-cause mortality rates following hospitalization….”
The unknown in the study was how the closures affected low income patients. But according to Non-Profit Quarterly:
Researchers reported that among Medicare patients there were no substantial changes in admissions, lengths of stay, or readmissions, but also cautioned that the study should not be interpreted to mean that every hospital loss is harmless….
While the study supports the argument that access to care has improved, the data does not, however, tell the whole story. One-third of institutions that were closed were “safety net” hospitals that treated large numbers of low-income and uninsured people. Since only easily-accessed Medicare patient information was reviewed, impact on those populations is still unknown.
Unknown? A study of three hospital closures from 2015, conducted by the Kaiser Commission on Medicaid and the Uninsured and the Urban Institute, actually found that lower income and elderly patients were negatively affected and “were more likely to face transportation challenges and thus more likely to delay or forgo needed care.”
But in Planned Parenthood’s case, there are already hundreds of FQHC alternatives available, open and ready to serve the public. A December 2015 Congressional Research Service report which compared the services of Planned Parenthood Federation of America-affiliated health centers (PPAHC) to those of Federally Qualified Health Centers (FQHC) found…
FQHCs are required to provide primary, preventive, and emergency health services.
FQHCs focus on providing more comprehensive primary care, dental, and behavioral health services.
FQHCs provide far more services in a given year than do PPAHCs.
PPAHCs focus their services on individuals of reproductive age; FQHCs provide services to individuals throughout their lifetimes.
FQHCs served 22.9 million people in 2014; PPAHCs served 2.7 million.
358 counties have both a PPAHC and a FQHC.
FQHCs also receive federal grants that require them to provide family planning (among other services) to Medicaid beneficiaries.
Planned Parenthood and its supporters want the public to believe that only Planned Parenthood is able to care for the needs of the 2.5 million patients they “serve.” And they will suggest that if they are defunded and close facilities, the hundreds of FQHCs that replace them (already in existence and serving patients, mind you) will be overwhelmed by the influx of patients and thus unable to address their many needs. (This was the same fear that plagued Democrats when they passed the Affordable Care Act, yet they argued that the system would be more than able to handle that influx.)
A study on the effects of the Affordable Care Act, conducted by the Robert Wood Johnson Foundation and health care company Athenahealth, which gathered data from 15,700 of Athenahealth’s clients, found that new patient visits to primary care physicians only increased slightly. It was anticipated that uninsured patients now gaining insurance might have unmet medical needs, and their demand for services might overwhelm the capacity of primary care doctors. But according to the study, this idea proved false. Kathy Hempstead, director of the Robert Wood Johnson Foundation, told USA Today that the study “suggests that, even though there’s been a big increase in coverage, it’s a relatively small part of the market and the delivery system is able to handle the demand.”
For years, Planned Parenthood has been closing centers despite a steady increase in funding under the Obama administration. The Congressional Research Service found that the number of PPAHC affiliates and facilities has declined since 2009-2010, when PPFA reported having 88 affiliates (since reduced by 32 percent) and 840 health centers (since reduced by 21 percent). And, as of December 20, 2016, there are now only 650 Planned Parenthood centers, indicating a 22.67 percent decline.
In addition, Planned Parenthood patients have also decreased over the years. In 2014, Planned Parenthood saw 2.5 million patients — down a whopping 24.24 percent since 1996, when they saw 3.3 million and received far less government funding ($177.5 million in 1996 compared to $553.7 million in 2014). In contrast, FQHCs have increased the number of patients seen in each year since 2009. From 2009 to 2014, FQHC patients increased from 18.9 million to 22.9 million.
Planned Parenthood is the largest provider of abortion in the nation. Live Action has documented how Planned Parenthood manipulates its own data to cover up the fact that abortion – not women’s health care – accounts for the lion’s share of the corporation’s services for pregnant women.
Defunding the largest chain of abortion clinics will not send millions of patients to their demise — and Planned Parenthood knows this. The truth is that taxpayer dollars can be better spent on real health care organizations that will serve the American public and maintain the sanctity of life in the process.
http://liveactionnews.org/sky-fall-planned-parenthood-defunded-heres-why/
Every year between Christmas and New Year, there is a geek gathering in Germany organized by the Chaos Computer Club. Talks are available on their website and I've taken the opportunity to trawl through the archive. One of the more accessible projects is an ongoing attempt to make a cooking machine which combines weighing scales, slicer, rice cooker, pressure cooker and sous-vide cooker. That's a sensible idea. However, the implementation is barking mad. Version 1 and version 2 are described in a 54 minute talk from 2012. Version 3 and version 4 are described in a 39 minute talk from 2014. I prefer version 2, especially given that a potential investor requested the second prototype then said something akin to "I wished you hadn't shown me this."
I have extreme misgivings about the placement of a Raspberry Pi - a credit card computer with no ECC RAM or other safeguards - in the proximity of heat and magnetic fields. This isn't just a means of bootstrapping. The entire design philosophy is about dangerous kludging and shows neither expertise in hardware nor software. Another blown 1.2kV transistor? That was an impressive noise. Control software written in PHP. XML configuration deprecated in favour of MySQL Server. Thankfully, they're not completely mad and strongly discourage (intentional) remote control of the machine. Also, there is consideration for an emergency stop button, as defined in ISO 13850. However, if they want to rank recipes by use then the control systems (running systemd) will be communicating on the public Internet. And I don't have any faith that it'll be achieved securely and competently.
"A watched pot never boils"? Perhaps that'll become "A watched IoT pot never explodes."
Found a story about my... well, not my home town, but the town you have to go to from my home town for anything besides gas, beer, or religion. Turns out Nick Cage's rental car broke down there and he had a thing or two to say about the place. See, that's what I mean when I say to folks who only see my views on politics and other big shit: you don't know me at all.
This kind of shit is just another day in a red state. If someone comes up and says you owe them something that you don't, you laugh and punch them in the face, but if you see someone in actual need, you help your fellow man because it's the right thing to do and because you might need a hand too some time. In a place where most everybody grows up poor and having to work their ass off to get by, you help each other because it's just what you do. Nick could have broken down a half mile from where he did, over by the meth dealers, and he still would have gotten the same reception.
Compiler flags have become horribly sub-optimal and can be greatly improved for small computers and virtual hosting. The lazy case:-
gcc -O3 foo.cc
should be strictly avoided. Assuming this is run on a computer with 1GB RAM, this is functionally equivalent to setting some of the flags to:-
gcc --param ggc-min-heapsize=131072 --param ggc-min-expand=100 -fno-strict-overflow -fsigned-zeros -fsignaling-nans -ftrapping-math -fdefer-pop -fno-omit-frame-pointer -fno-stack-protector -fearly-inlining -finline-limit=1200 -fno-merge-constants -fgcse-lm -fgcse-sm -fgcse-las -fgcse-after-reload -fira-region=mixed -fsched-spec -fsched-spec-load --param max-hoist-depth=30 --param max-inline-recursive-depth=8 -freorder-blocks -freorder-functions -falign-functions=1 -falign-loops=1 -falign-labels=1 -falign-jumps=1 -ftree-ch -ftree-loop-distribution -ftree-vect-loop-version -fipa-cp-clone -ftracer -fenforce-eh-specs foo.cc
This is probably not what you want. After four months or so of intensively compiling code for systems with small RAM and small processor caches, these flag options look grossly inefficient. If you're doing cloudy computing, credit card computing or working on any system with multiple cores then you'll very probably want to specifically set many of these flags to something more like:-
gcc --param ggc-min-heapsize=32768 --param ggc-min-expand=36 -fstrict-overflow -fno-signed-zeros -fno-signaling-nans -fno-trapping-math -fdefer-pop -fomit-frame-pointer -fstack-protector-all -fno-early-inlining -finline-limit=4 -fmerge-constants -fgcse-lm -fgcse-sm -fgcse-las -fgcse-after-reload -fira-region=one -fno-sched-spec -fno-sched-spec-load --param max-hoist-depth=6 --param max-inline-recursive-depth=0 -freorder-blocks -freorder-functions -falign-functions=64 -falign-loops=64 -falign-labels=1 -falign-jumps=1 -fno-tree-ch -fno-tree-loop-distribution -fno-tree-vect-loop-version -fno-ipa-cp-clone -fno-tracer -fno-enforce-eh-specs foo.cc
Depending upon what is being compiled, this can reduce stripped binary size by 1/3 or more; a reduction of 4/5 has been observed in optimistic cases. Furthermore, this is not at the expense of speed. Execution time for regular expressions can be reduced by 22% and execution time for SQL stored procedures can be reduced by 45%. Compilation time and memory can also be reduced to the point that it is possible to self-host within 512MB RAM. As an example, with the addition of the linker options --no-keep-memory --reduce-memory-overheads to minimize linker memory usage, compilation of clang-3.8.1 using gcc-4.9.2 goes from exhausting a 2GB application space to requiring 360MB RAM.
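Incidentally, --no-keep-memory and --reduce-memory-overheads belong to the GNU linker rather than to gcc itself, so when they are given to the gcc driver they need the -Wl, prefix. A minimal sketch, assuming GNU ld and a hypothetical foo.cc:-
gcc -Os foo.cc -Wl,--no-keep-memory -Wl,--reduce-memory-overheads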
The philosophy is to squeeze as much code as possible into L1 cache while reducing cache misses. In many contemporary cases, L2 and L3 cache either don't exist or are contended by hundreds of simultaneous users, possibly to the extent that cache affinity cannot be utilized. In such cases, the available L3 cache may effectively be smaller and slower than L1 cache.
Anyhow, let's pick off some quick wins. gcc and clang have multiple register allocation strategies. gcc's default is a nice conservative choice which compiles legacy code without being pathological. Unfortunately, for contemporary systems, this is a completely borked setting but it can be easily rectified with -fira-region=one. A setting which notably reduces compilation time is --param max-hoist-depth=6. I am under the impression that this setting reduces the bound for moving stuff out of loops. Why would we reduce this? It is, unfortunately, an O(n^2) process and code has to be seriously awful to require more than six hoists. A pathological case would be useless code inside nested XYZ loops. If you're compiling that then your program deserves to run slowly. Anyhow, reducing this bound makes a difference to compilation time without significantly affecting output. In combination with -fno-early-inlining, -fno-tree-vect-loop-version, -fno-ipa-cp-clone and -fno-tracer, program footprint can be reduced to the point where the savings in compilation and execution time outweigh any disadvantage.
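If you only want those quick wins and don't fancy the full flag soup, a minimal sketch (hypothetical foo.cc, everything else left at the defaults) would be:-
gcc -O2 -fira-region=one --param max-hoist-depth=6 foo.cc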
Perhaps some of these flags should be explained in more detail. Compilers typically perform a number of transforms on a program. Some of these transforms may be performed multiple times. A transform may be scheduled two or more times in a row or a transform may be repeated after many other transforms are applied. One of these transforms is inlining. In this case, it is typical to place the code of small subroutines directly in the place where they are called. This saves a call and return. It also allows each inlined instance to be optimized into the surrounding code. For example, if each invocation uses a different constant, inlined copies may be optimized accordingly. -fno-early-inlining cuts out a transform which bloats compiler memory usage, compiler execution time, compiled program size and (unless you have exclusive use of a fat L3 cache) binary execution speed. Early inlining is a great optimization for a desktop application but it is a hindrance for almost every other case.
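The effect is easy to measure on your own code. A sketch, assuming a hypothetical translation unit foo.cc and the binutils size tool: compile the same file with and without the flag and compare the text column:-
gcc -O3 -c foo.cc -o foo-default.o
gcc -O3 -fno-early-inlining -c foo.cc -o foo-lean.o
size foo-default.o foo-lean.o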
Another transform is loop unrolling. In the trivial case, a loop which iterates a fixed number of times can be re-written by a compiler. The content of the loop is duplicated a fixed number of times. This eliminates comparisons and branches. It also eliminates use of a loop register. Therefore, the contents of the loop (and functions called (or inlined)) have more registers available. Unless the number of iterations is high, this is a great optimization for processors with no instruction cache or very large caches. It is, however, counter-productive for systems with small caches or very high cache contention. In addition to unrolling there is vectorization. In this case, a compiler will attempt to perform pipelining and/or use SIMD instructions. Where it is not possible to determine data alignment for vectorization at compilation time, gcc emits aligned and unaligned versions plus conditional code. This bloat can be inhibited with -fno-tree-vect-loop-version. And the splitting of a loop into several smaller loops (loop distribution) can be inhibited with -fno-tree-loop-distribution.
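Again, you can see the duplicated loop bodies for yourself. A sketch, assuming a hypothetical foo.cc containing a loop over data whose alignment isn't known at compile time; the object built without the flag carries the extra aligned/unaligned copies:-
gcc -O3 -c foo.cc -o foo-vect.o
gcc -O3 -fno-tree-vect-loop-version -c foo.cc -o foo-novect.o
objdump -d foo-vect.o | wc -l
objdump -d foo-novect.o | wc -l
(objdump -d | wc -l is only a crude instruction count, but it is enough to spot the bloat.)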
-fno-ipa-cp-clone inhibits function cloning for constant propagation - in effect, compile-time currying. Currying is highly encouraged for interpreted languages. However, for a native binary running through a processor with a tiny instruction cache, it is a hindrance. With -O3 compilation, the default is to make multiple copies of functions which are deemed too large to inline and perform optimizations on each copy anyhow. This is inhibited with -fno-ipa-cp-clone. Admittedly, where functions are not curried, a processor performs extra work. However, it is assumed that clock cycles taken to perform extra work are less than clock cycles wasted by instruction cache misses and/or virtual memory paging. In practice, currying is useful for compilation of desktop applications and dedicated server applications but not useful elsewhere.
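If you're curious whether a build is producing these clones at all, gcc usually gives the cloned copies a .constprop suffix in the symbol table, so they are easy to spot. A sketch, assuming a hypothetical foo.o built at -O3 and GNU binutils:-
nm foo.o | grep constprop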
-ftracer is another source of bloat and interacts particularly badly with C++ templates. In this case, a compiler provides a separate function exit point for every conditional code path. The mass elimination of diamonds (branch-and-rejoin shapes in the control flow) allows a very large amount of flexibility with optimizations. However, this occurs at the expense of significant bloat. Such bloat is likely to be counter-productive away from a desktop environment. You may wish to apply -fno-tracer (and -fno-ipa-cp-clone) selectively. For example, on a mixed C/C++ project, such as MySQL Server, -fno-tracer (and -fno-ipa-cp-clone) works well when only applied to the C++.
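A sketch of how that selective application might look, assuming a Make-based build that honours the conventional CFLAGS/CXXFLAGS variables (real projects, MySQL included, have their own ways of plumbing per-language flags through):-
make CC=gcc CXX=g++ CFLAGS="-O3" CXXFLAGS="-O3 -fno-tracer -fno-ipa-cp-clone"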
--param max-inline-recursive-depth=0 inhibits unrolling of recursive functions. The recursive version of loop unrolling probably works really well on a Xeon but, on armv6t, it blows goats.
-fgcse-lm -fgcse-sm -fgcse-las -fgcse-after-reload prevent particularly boneheaded sequences of instructions from occurring even when -O3 is replaced with -O0 or -Os. (We have a script with this functionality. This allows self-hosted compilation of MySQL Server 5.7.15, clang-3.8.1 and suchlike within 256MB RAM.) Similarly, -fdefer-pop redundantly retains a useful speed and size optimization when other optimizations degrade. For your purposes, it is very probable that these flags can be omitted without adverse effect.
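The script itself isn't reproduced here, but the idea needn't be elaborate. A hypothetical sketch - a wrapper that simply forwards everything to gcc with the gcse flags appended:-
#!/bin/sh
# hypothetical wrapper: pass all arguments through to gcc, appending the gcse flags
exec gcc "$@" -fgcse-lm -fgcse-sm -fgcse-las -fgcse-after-reload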
-fno-signed-zeros allows additional float optimizations by assuming that IEEE754 +0.0 is equivalent to IEEE754 -0.0. -fno-signaling-nans -fno-trapping-math reduce bloat by eliminating divide-by-zero checks and suchlike, which your interpreter, database or whatever should handle anyhow with application-specific code. -fstrict-overflow is badly named but enables additional integer optimizations on the assumption that undefined integer behaviour is not exploited. In practice, stuff like incrementing the largest integer still leads to undefined behaviour such as integer wrap-around.
It should be noted that -fstrict-overflow -fno-signed-zeros -fno-signaling-nans -fno-trapping-math -fmerge-constants -fno-enforce-eh-specs is strictly against the C and C++ specifications. In practice, tolerant code and debug builds catch most of the problems. One exception is that python's tests note the non-conformance of integer and float mathematics. Perl, by contrast, incurs no such problems and obtains faster execution speed. Whatever.
-fno-sched-spec -fno-sched-spec-load inhibit some shocking behaviour. Assuming a large cache and an advanced processor, there exist cases where it is beneficial to fetch data before a branch occurs. This remains beneficial even when one of the two code paths doesn't use the fetched data. What actually occurs on a processor with out-of-order execution is that a request to load a register occurs, a decision to perform a branch occurs, then a code path may use the data. In this case, the data may arrive partially or fully during the time taken by the branch and the subsequent instructions. That's the ideal case. On a simpler processor or a heavily loaded server, a costly and unnecessary cache miss is likely.
Parameters with magic numbers may not be optimal. These numbers are based on random walks and exponential decays - but tweaked for pragmatic reasons. I'd like to perform A/B testing to get empirical improvements.
-falign-functions=64 -falign-loops=64 -falign-labels=1 -falign-jumps=1 greatly improve execution speed on ARM6. Unfortunately, they bulk binaries by about 5% and this cost is potentially pushed to virtual memory. However, where RAM is sufficient, these parameters minimize cache line usage. Some ARM processors only have 128 cache lines. Use of those lines can be maximized by aligning functions and loops to cache line boundaries. A worked example follows. An 80 byte loop without alignment may be placed across three cache lines. For example, 4 bytes at the end of one cache line, 64 bytes fully occupying one cache line and 12 bytes at the beginning of another cache line. This arrangement unnecessarily increases cache contention. If the loop is always placed at the start of a cache line, the total number of cache lines can be minimized. Technically, all four parameters can be set to 64 but this would bulk programs by another 5% while providing minimal gains. Conceptually, only back branches require alignment. So, subroutines, for and while are aligned. if, else, break, try and catch are unaligned. Consider the case of else inside a for loop. Both halves of the condition will eventually be cached and therefore the total length of the loop is more important than alignment of its constituent parts. Indeed, for nested loops, I wish that it was possible to only align the outermost loop.
Inspiration for cache alignment comes from an explanation of the Dalvik virtual machine interpreter. In this case, Java bytecode re-compiled into Dalvik bytecode may run faster on an ARM processor even when not using Jazelle hardware acceleration. The trick hinges on one instruction: a jump indirect with six bit pre-scale. This is placed at the end of one cache line. This allows interpreter code for each trivial Dalvik instruction to fit within one cache line. In practice, relatively few cache lines are required to implement common Dalvik instructions. This makes a bytecode interpreter practical even on processors with 128 cache lines. Overall, this arrangement degrades gracefully and incurs less cache contention than Jazelle's partial hardware implementation of Java.
64 byte alignment will also work on x86. However, Intel instruction dispatch circuitry typically works on 16 byte chunks (with or without alignment). Therefore, smaller alignment or no alignment may be optimal on x86. However, if you want to set and forget one lot of flags, the suggested ones are likely to be an overall gain across multiple processor architectures.
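If you do want to set and forget, the usual place is the CFLAGS/CXXFLAGS environment that most configure- and make-driven builds pick up. A sketch, assuming a POSIX shell and using the second command line above as the baseline:-
export CFLAGS="-O2 -fomit-frame-pointer -fira-region=one -falign-functions=64 -falign-loops=64 -falign-labels=1 -falign-jumps=1"
export CXXFLAGS="$CFLAGS -fno-tracer -fno-ipa-cp-clone"
./configure && make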
Updated... see below.
I was at a concert a while ago and recorded what I found out later was a debut performance of a song. As I am friends with the lead singer, I'd like to send her a copy, but there are a couple of "issues".
Part 1: Editing
So, I've got a couple video files that I want to "process". I have no experience with video and only very limited experience with audio file manipulation.
Here is the pertinent data from ffprobe on the 25.6 MB introduction:
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'intro.mp4':
Duration: 00:00:18.54, start: 0.000000, bitrate: 11342 kb/s
Stream #0:0(eng): Video: h264 (Constrained Baseline) (avc1 / 0x31637661), yuv420p, 1280x720, 11378 kb/s, SAR 1:1 DAR 16:9, 24.08 fps, 24.17 tbr, 90k tbn, 180k tbc (default)
Stream #0:1(eng): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 96 kb/s (default)
Here is the pertinent data from ffprobe on the 1099.8 MB song itself:
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'song.mp4':
Duration: 00:10:39.32, start: 0.000000, bitrate: 14092 kb/s
Stream #0:0(eng): Video: h264 (Constrained Baseline) (avc1 / 0x31637661), yuv420p, 1280x720, 14000 kb/s, SAR 1:1 DAR 16:9, 30 fps, 30 tbr, 90k tbn, 180k tbc (default)
Stream #0:1(eng): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 95 kb/s (default)
(1) At the outset I had accidentally activated the wrong camera on my mobile phone. I'd like to keep the audio of the singer introducing the song.
(2) The video of the song actually contains TWO songs. I'm only interested in extracting the first song (the first 4m40s) from this video.
(3) Optional, but would be really nice, I'd like to make a "title" with the name of the group, name of the song, date, and location with the intro audio playing in the background.
(4) Ideally, I'd like to catenate the intro from (3) to the video from (2) and create a single file.
I'm running Windows 7 Professional. Have you ever done something like this? What free tools would you recommend?
Part 2: Shipping
I expect the final file to be about 500 MB, give or take. How would you recommend getting the file to her? It is rather large to send as an e-mail attachment. I do not have drop-box or one-cloud or any of the other file-sharing services. I'd like to keep the file private. I can't be the first who wants to do this. What options do I have?
Update(s):
Update 1: 20170101a - Happy New Year!
I've completed step (1) and extracted the audio for the intro to a separate file, intro.mp3, using:
ffmpeg -i intro.mp4 -ab 96k intro.mp3
Yeah, I know. 96kbps is not the greatest, but it's what was captured in the video, so I'm stuck with it.
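Since the captured audio is already AAC, another option - avoiding a second lossy encode entirely - would be to copy the audio stream untouched into an .m4a container instead of transcoding to MP3. A sketch against the same intro.mp4:-
ffmpeg -i intro.mp4 -vn -c:a copy intro.m4a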
Now to extract just the first 4m40s video of the song to a separate file. Looks like this could be done with ffmpeg?
Update 2: 20170101b
With many thanks to fn0rd666, got the "magic" incantation for ffmpeg:
ffmpeg -ss 00:00:00.0 -i infile.mp4 -t 285 -codec copy outfile.mp4
Which means: start at the very beginning of the source, read from the file infile.mp4, copy 285 seconds, copy input straight to output (no transcoding), and send the output to the file: outfile.mp4!
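Steps (3) and (4) look doable with the same tool. A hedged sketch, using hypothetical file names: title.png is a 1280x720 still made in any image editor, intro.mp3 comes from step (1), outfile.mp4 from step (2), and list.txt is a plain text file containing the two lines file 'title.mp4' and file 'outfile.mp4'. The title clip has to be encoded with parameters matching the song clip (1280x720, H.264, AAC, 30 fps), or the no-transcode concatenation at the end won't behave:-
ffmpeg -loop 1 -i title.png -i intro.mp3 -c:v libx264 -tune stillimage -pix_fmt yuv420p -r 30 -c:a aac -b:a 96k -shortest title.mp4
ffmpeg -f concat -i list.txt -c copy final.mp4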
Sixteen: The Final Chapter
It's that time of year again. The time of year when everyone and their dog waxes nostalgic about all the shit nobody cares about from the year past, and stupidly predicts the next year in the grim knowledge that when the next New Year comes along nobody will remember that the dumbass predicted a bunch of foolish shit that turned out to be complete and utter balderdash. I might as well, too. Just like I did last year (yes, a lot of this was pasted from last year's final chapter).
Some of these links go to /., S/N, mcgrewbooks.com, or mcgrew.info. Stories and articles meant to ultimately be published in a printed book have smart quotes, and slashdot isn't smart enough for smart quotes.
As usual, first: the yearly index:
Journals:
Random Scribblings
the Paxil Diaries
2007
2008
2009
2010
2011
2012
2013
2014
2015
Articles:
Useful Dead Technologies Redux
The Old Sayings Are Wrong
How to digitize all of your film slides for less than ten dollars
GIMPy Text
The 2016 Hugo convention
Song
My Generation 21st Century
Santa Killed My Dog!
Book reviews
Stephen King, On Writing
Vachel Lindsay, The Golden Book of Springfield
J. D. Lakey, Black Bead
Science Fiction:
Wierd Planet
The Muse
Cornodium
Dewey's War
The Naked Truth
The Exhibit
Agoraphobia
Trouble on Ceres
Last year's stupid predictions (and more):
Last year I said I wasn't going to predict publication of Voyage to Earth and Other Stories, and I was right; it's nearly done. So this year I do predict that Voyage to Earth and Other Stories will be published. I'm waiting for Sentience to come back from Motherboard, who's been hanging on to it since last February. I may have to e-mail them and cancel the submission if it isn't back by this February.
I'll also hang on to last year's predictions:
Someone will die. Not necessarily anybody I know...
SETI will find no sign of intelligent life. Not even on Earth.
The Pirate Party won't make inroads in the US. I hope I'm wrong about that one.
US politicians will continue to be wholly owned by the corporations.
I'll still be a nerd.
You'll still be a nerd.
Technophobic fashionista jocks will troll slashdot (but not S/N).
Slashdot will be rife with dupes.
Many Slashdot FPs will be poorly edited.
Slashdot still won't have fixed its patented text mangler.
Microsoft will continue sucking.
And a new one: DONALD TRUMP WILL (gasp) BE PRESIDENT OF THE US!!! God help us all! (He can't possibly be worse than George H. Bush or James Buchanan, can he?)
Happy New Year! Ready for another trip around the sun?