
SoylentNews is people

posted by janrinok on Tuesday May 26 2015, @12:30PM   Printer-friendly
from the life-is-easier-with-FOSS dept.

The European Union's interoperability page reports:

Using open source in school greatly reduces the time needed to troubleshoot PCs, [as indicated by] the case of the Colegio Agustinos de León (Augustinian College of León, Spain). In 2013, the school switched to using Ubuntu Linux for its desktop PCs in [classrooms] and offices. For teachers and staff, the amount of technical issues decreased by 63 per cent and in the school's computer labs by 90 per cent, says Fernando Lanero, computer science teacher and head of the school's IT department.

[...] "One year after we changed PC operating system, I have objective data on Ubuntu Linux", Lanero tells Muy Linux [English Translation], a Spanish Linux news site. By switching to Linux, incidents such as computer viruses, system degradation, and many diverse technical issues disappeared instantly.

The change also helps the school save money, he adds. Not having to purchase [licenses] for proprietary operating systems, office suites, and anti-virus tools has already saved about €35,000 in the 2014-2015 school year, Lanero says. "Obviously it is much more interesting to invest that money in education."

[...] The biggest hurdle for the IT department was the use of electronic whiteboards. The school uses 30 such whiteboards, and their manufacturer [Hitachi] does not support the use of Linux. Lanero got the Spanish Linux community involved, and "after their hard work, Ubuntu Linux now includes support for the whiteboards, so now everything is working as it should".

[...] Issues [with proprietary document formats] were temporarily resolved by using a cloud-based proprietary office solution, says Lanero, giving the IT department time to complete the switch to open standards-based office solutions. The school now mostly uses the LibreOffice suite of office tools.

[...] "Across the country, schools have contacted me to hear about the performance and learn how to undertake similar migrations."

As I always say: simply avoid manufacturers with lousy support, and FOSS is just the ticket.


[Editor's Comment: Original Submission]

 
This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 3, Interesting) by Marand on Tuesday May 26 2015, @01:09PM

    by Marand (1081) on Tuesday May 26 2015, @01:09PM (#187994) Journal

    Nice to see more adoption of open solutions instead of funneling education funds into licensing fees. It probably helps that the proprietary OS vendors are US companies: between growing mistrust (thanks, NSA) and the knowledge that any money that goes to Microsoft or Apple is basically leaving the country, never to be seen again, the choice gets easier. Even if (hypothetically) there are no cost savings in going from licensing Windows to paying for knowledgeable Linux admins, spending more on locals who can handle administration helps their economy more than handing all that cash to a US company where it'll disappear into pockets and tax loopholes.

    I think that's likely a major factor in limiting US adoption, actually. If Windows and OS X were, say, Russian products, you can practically guarantee there would be a much stronger push in the US to adopt something else. As it is, they're "home turf" products, so it's easier to rationalise it away as supporting US businesses.

    A shame, really, because I think schools, especially, should be using open source, regardless of what's used elsewhere. An open system is better for a learning environment than a black box you aren't allowed to even look at too closely. Using an open source OS means you not only have an open system in the schools, but they can also provide liveCDs to the students and say, "Here, you can use it at home and learn how it works! Tear it apart, do whatever you like!" You know, the sort of thing you can't do with proprietary software unless you want the BSA at your door demanding money.

    Most students won't care, but even if only a handful benefit from it, it's a win. For the rest, at worst it won't hurt, because they'll still be able to pick up Windows or OS X easily enough later when needed, assuming they aren't already familiar.

  • (Score: 4, Insightful) by VLM on Tuesday May 26 2015, @01:27PM

    by VLM (445) Subscriber Badge on Tuesday May 26 2015, @01:27PM (#188003)

    they'll still be able to pick up Windows ... easily enough later when needed

    Something to think about is the death of the operating system and desktop environment paradigm.

    My workplace used to use native apps, but they got rid of the last one for general use a couple months ago. Everything happens in a web browser now except for legacy MS Office, and they're trying to run away from that too. One special-purpose app I still use is the VMware vSphere client: there's a web interface, but the admins can't set it up correctly or whatever, so I'm stuck on the slow PITA native client when I need to mess with cloud images.

    Figuring out how to use Windows as a Chrome bootloader isn't much harder than figuring out how to use Linux as a Chrome bootloader...

    Non-casual games are still native, legacy MS Office is still native, weird hardware is still native (think FPGA development environment). That's about it for native apps.

    One workplace advantage is that once everything moves to a web browser, the execs with their Apple laptops and iPads don't need any special handling. They get their desire of being expensive special people without increasing anyone else's workload, which is cool.

    Even just ten years ago, native windows apps used to be business critical. They're just gone now, replaced by web interfaces.

    In the tech field there's a weird "moth to flame" attraction of some subcultures for ever more elaborate desktop environments, meanwhile the actual users are becoming less interested in native apps and environments every day. Eventually this divergence is going to be even more comical than it already is.

    • (Score: 2) by opinionated_science on Tuesday May 26 2015, @01:32PM

      by opinionated_science (4031) on Tuesday May 26 2015, @01:32PM (#188006)

      I modded you funny for "their desire of being expensive special people".

      Priceless!!!!

    • (Score: 2) by Marand on Tuesday May 26 2015, @01:53PM

      by Marand (1081) on Tuesday May 26 2015, @01:53PM (#188017) Journal

      Something to think about is the death of the operating system and desktop environment paradigm. [...] Figuring out how to use Windows as a chrome bootloader isn't much harder than figuring out how to use Linux as a chrome bootloader...

      That's an interesting point, though I'd say it's pretty hard to beat Linux as a Chrome bootloader [wikipedia.org], or even a Firefox bootloader [mozilla.org] if that's more your style.

      You still get a better gaming bootloader out of a Wintendo [catb.org], though Valve is trying [wikipedia.org].

      • (Score: 2) by wantkitteh on Tuesday May 26 2015, @06:42PM

        by wantkitteh (3362) on Tuesday May 26 2015, @06:42PM (#188183) Homepage Journal

        This. I have successfully expunged all Windows machines from my life except for a couple of highly proprietary use cases at work and video games. I'll likely be taking my best shot at shifting my gaming to Linux by the end of the week, depending on deliveries. SteamOS gets first shot, just because I'm fairly sure I'll end up on Mint Cinnamon afterwards when I find non-SteamOS things I want to do. [dolphin-emu.org] Twitch streaming will likely happen along the way too, although OBS seems happier on Windows right now, with fewer glitches in the encoded video stream (YMMV) and better plugin support, but I expect that'll change in time.

    • (Score: 3, Informative) by LoRdTAW on Tuesday May 26 2015, @02:15PM

      by LoRdTAW (3755) on Tuesday May 26 2015, @02:15PM (#188028) Journal
      • (Score: 2) by VLM on Tuesday May 26 2015, @03:50PM

        by VLM (445) Subscriber Badge on Tuesday May 26 2015, @03:50PM (#188083)

        You have to install something like the Xilinx software on a machine connected to the FPGA board.

        You could make an FPGA "appliance" that you'd connect to via a web interface or VNC/rdesktop, using a Raspberry Pi maybe? It would be slow.

        A truly amazingly advanced dev board could host a webserver that allows you to upload to the FPGA from a web interface, I suppose.
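The "dev board hosting its own web server" idea can be sketched in a few lines of Python. This is purely illustrative: the sync-word sanity check and the commented-out `flash_bitfile()` hook are assumptions, not any real board's API.

```python
# Sketch: a dev-board "appliance" accepting bitfile uploads over HTTP.
# flash_bitfile() is a hypothetical hook where real hardware would drive
# JTAG/SPI; here it is left as a comment.
from http.server import BaseHTTPRequestHandler, HTTPServer

def looks_like_bitfile(data: bytes) -> bool:
    # Loose sanity check: Xilinx bitstreams carry the sync word 0xAA995566.
    return b"\xaa\x99\x55\x66" in data

class BitfileHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        data = self.rfile.read(length)
        if looks_like_bitfile(data):
            # flash_bitfile(data)  # hypothetical: clock the bits into the FPGA
            self.send_response(200)
        else:
            self.send_response(400)
        self.end_headers()

# On the board itself (blocks forever):
# HTTPServer(("0.0.0.0", 8080), BitfileHandler).serve_forever()
```

The validation is deliberately loose; a production version would parse the full bitstream header rather than scan for a sync word.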

        • (Score: 1) by tftp on Tuesday May 26 2015, @04:06PM

          by tftp (806) on Tuesday May 26 2015, @04:06PM (#188091) Homepage

          Huh? If I understood you correctly, Xilinx FPGAs can be programmed by any OS that can talk to a USB serial port (FT2232H). Lattice CPLDs are programmed over JTAG, I2C, or SPI. None of that is an OS-specific process; Xilinx has supported Linux for ages. Also, Xilinx has partial reconfiguration, which does allow you to run a web server on the device itself. But it's more practical to use a boot Flash with multiple images and trigger full reconfiguration when done.

          • (Score: 2) by VLM on Tuesday May 26 2015, @04:22PM

            by VLM (445) Subscriber Badge on Tuesday May 26 2015, @04:22PM (#188108)

            None of that is an OS-specific process.

            All of that is an OS-specific process and can only be done from a local native application of some sort, even if it's a Java app that runs on anything with a JVM, hardwired to the physical board via USB cable. (And don't get me started on companies that fill their USB ports with silicone to prevent IP theft.)

            None of those can be done, say, from my phone, over its wifi connection, using nothing more than the phone's web browser connecting to a site. Unless you stretch the definition to include a Windows box with the application installed on it and a VNC server, so that any VNC client on the LAN can connect to it and run the native app remotely.

            In corporate IT land or academia land I need signed forms and permissions and evaluations for every individual box or individual user account that has an app installed. With a web interface I need one guy to get permission to plug it in, assign an address and DNS name, and then anyone in the company can type a url into their browser without first filing an IT permission slip.

            I know Xilinx supports Linux natively as a native app; that's the only way I've ever programmed an FPGA or CPLD. I guess it works on Windows and OS X too, although I've never tried it. AFAIK there is no dev board out there that plugs in an ethernet cable, DHCPs and Bonjours itself, and then you connect any old web browser to http://something:80 [something] and there's a complete development environment, without installing anything on the machine that runs the web browser.

            It's like the difference between running the Outlook Express native client, which is installed on the box, and Outlook webmail.

            • (Score: 1) by tftp on Tuesday May 26 2015, @05:03PM

              by tftp (806) on Tuesday May 26 2015, @05:03PM (#188140) Homepage

              I know Xilinx supports Linux natively as a native app; that's the only way I've ever programmed an FPGA or CPLD. I guess it works on Windows and OS X too, although I've never tried it. AFAIK there is no dev board out there that plugs in an ethernet cable, DHCPs and Bonjours itself, and then you connect any old web browser to http://something:80 [something] and there's a complete development environment, without installing anything on the machine that runs the web browser.

              That is because that "board" would have to be a complete PC with a good amount of RAM and a fast CPU. This is needed to do the "complete development environment" per your request. Place and route is not exactly instantaneous, especially on large parts and complex designs.

              There is also that little issue with licensing. There are free versions of the Xilinx SDK, but there are also large editions that include professional features. Those are not free. I guess one could buy a PC that is loaded with all the software... but why bother, if you can buy the software separately and run it on any PC of your choice? Or maybe not one PC but a cluster? (Xilinx has supported that for many years now.)

              There is only a limited need for a "board" that allows you just to upload a .bit or a Flash image file. That role is currently filled by any old PC that runs a Chipscope server. Then you can connect to that PC over TCP/IP and do your thing. I don't believe that anyone manufactures a standalone Ethernet-enabled JTAG pod. Is there a market for that? I don't know; but if there is, it's a very small, niche market. Perhaps it is large enough for a one-man company. But the price of the board has to be lower than the price of a Digilent JTAG module plus the R-Pi.

              • (Score: 2) by VLM on Tuesday May 26 2015, @05:33PM

                by VLM (445) Subscriber Badge on Tuesday May 26 2015, @05:33PM (#188156)

                Yes, although it's important to remember that WRT screwing around, I'm not doing anything more technologically impressive than the state of the art in 2005, so a single-chip computer in 2015 with the specs of a decent 2005 desktop ("pi-like") would be more than capable enough for advanced hobbyists and uni students. People were doing "real work" in '95 with '95-class hardware, so screwing around in '05 wasn't very challenging, and now that '05-class hardware fits on a single $5 chip in '15...

                I mean, seriously, my first "hello world" project, just to prove the toolchain was up and running correctly, was a single-bit full adder: three switch inputs and two LED outputs. I think that was on the first-generation Digilent CPLD board. My first discovery was that the LEDs were active low, LOL. I would imagine a stereotypical uni class is equally technologically unambitious. It's not like semesters are longer or kids are smarter than in the very recent past. And the SoC of 2020 that has the specs of a 2010 desktop will be even more ridiculously overpowered when doing things that would have been challenging in '95. So in the long run it seems inevitable?
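That hello-world can be modelled in software too. A Python truth-table sketch of the same single-bit full adder (three inputs, sum and carry out), exhaustively checked against plain integer addition:

```python
# Single-bit full adder, modelled as boolean logic on 0/1 integers.
def full_adder(a: int, b: int, cin: int) -> tuple[int, int]:
    """Return (sum, carry_out) for one-bit inputs."""
    s = a ^ b ^ cin                     # sum bit: XOR of all three inputs
    cout = (a & b) | (cin & (a ^ b))    # carry out: majority function
    return s, cout

# Exhaustive check over all 8 input combinations:
# carry*2 + sum must equal the arithmetic sum of the three bits.
for a in (0, 1):
    for b in (0, 1):
        for cin in (0, 1):
            s, cout = full_adder(a, b, cin)
            assert cout * 2 + s == a + b + cin
```

On an FPGA the same logic is a couple of LUTs; the point of the exercise is proving the toolchain, not the circuit.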

                Another interesting sideline with students and hobbyists is that time is not money. So if it takes 5 minutes to compile a Z80 core, like it used to in 2005 or whatever it took exactly, well, it's not like uni students are highly paid...

        • (Score: 2) by LoRdTAW on Tuesday May 26 2015, @04:25PM

          by LoRdTAW (3755) on Tuesday May 26 2015, @04:25PM (#188109) Journal

          You can interface with an FPGA over SPI or even I2C to upload a bitfile. An Arduino with the proper library could upload bitfiles from an SD card, or even over BT or wifi. Uploading bitfiles on the fly from an RPi is no big deal either. But you can't compile the bitfiles on an RPi, simply because there is no ARM port of the dev tools, yet.

          The only instance where you might need Windows is if the dev board manufacturer did not release a Linux driver for the board's integrated serial/JTAG adapter. That is just laziness, and those boards should be avoided if you need Linux support. Sometimes they also have Windows-only tools for peripheral configuration, like writing a binary image to onboard flash. Worst case, you can use a VM running Windows for those tools.

          I have developed for both the Digilent Nexys 2 and the Terasic DE0-nano on Linux.
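The SPI-upload idea above can be sketched from a Raspberry Pi with the spidev library. This assumes the FPGA is wired for slave-serial/SPI configuration; the bus number, chip select, and clock rate are board-specific assumptions, not universal values.

```python
# Sketch: stream a bitfile to an FPGA over SPI from a Raspberry Pi.
def chunk_bitfile(data: bytes, size: int = 4096):
    """Split a bitstream into SPI-transfer-sized chunks."""
    return [data[i:i + size] for i in range(0, len(data), size)]

def upload_bitfile(path: str) -> None:
    import spidev  # Raspberry Pi userspace SPI driver; Pi-only
    spi = spidev.SpiDev()
    spi.open(0, 0)                  # bus 0, chip select 0 (board-specific)
    spi.max_speed_hz = 1_000_000    # assumed safe configuration clock
    with open(path, "rb") as f:
        data = f.read()
    for chunk in chunk_bitfile(data):
        spi.xfer2(list(chunk))      # clock the bits out
    spi.close()
```

Real configuration also involves toggling PROG_B and watching DONE on GPIO pins, which is omitted here for brevity.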

    • (Score: 2) by kaszz on Tuesday May 26 2015, @02:19PM

      by kaszz (4211) on Tuesday May 26 2015, @02:19PM (#188030) Journal

      Even just ten years ago, native windows apps used to be business critical. They're just gone now, replaced by web interfaces.

      Too bad if anything happens to the internet connection, or if any of your documentation is secret.

      • (Score: 2) by VLM on Tuesday May 26 2015, @03:59PM

        by VLM (445) Subscriber Badge on Tuesday May 26 2015, @03:59PM (#188085)

        Web doesn't necessarily imply public access. It pretty much does for consumer stuff like "internet refrigerator" or "internet thermostat", admittedly.

        Plenty of engineering tools at work live on RFC1918 addresses with no external NAT access and provide a tasty easy to use web interface.
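Whether an address sits in the RFC 1918 private ranges is easy to check mechanically; a small sketch using only Python's standard library:

```python
# Check whether an IPv4 address falls in one of the RFC 1918 private ranges.
import ipaddress

RFC1918 = [ipaddress.ip_network(n)
           for n in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

def is_rfc1918(addr: str) -> bool:
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in RFC1918)
```

(`ipaddress.ip_address(...).is_private` is a one-line alternative, but it also matches loopback and link-local space, so the explicit ranges are used here.)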

        In "the really old days" we used to have things like remote test gear connected by honest to god serial ports and modems for "remote" and in the 90s I got involved in projects to put them on terminal servers so we could just telnet from any desktop machine on our LAN. Since the turn of the century all that is gone and everyone ships web clients.

        Even our hourly-employee timeclock is now a website. All our reporting systems. All our UPSes. Transfer switches and their battery chargers. 20+ years ago, logging was a bunch of RS232 alert lines feeding a dot matrix line printer; that's all online via web for monitoring subsystems now. I don't have access to it, but I'm told the HVAC "front panel" is entirely virtual online now.

        All that stuff, 10+ years ago, would have been a native desktop app. I remember having to support them and having to fight IT to install our engineering applications. It's a lot easier to get permission to connect to the engineering/production network and just access a certain URL in a web browser.

        • (Score: 2) by kaszz on Tuesday May 26 2015, @04:10PM

          by kaszz (4211) on Tuesday May 26 2015, @04:10PM (#188095) Journal

          The problem with website-style access is that it's a poor design for other software to interact with. So the whole software-to-software communication becomes a huge and lengthy hurdle.

          • (Score: 2) by kaszz on Tuesday May 26 2015, @04:13PM

            by kaszz (4211) on Tuesday May 26 2015, @04:13PM (#188101) Journal

            s/lengthy/unreliable/

          • (Score: 2) by VLM on Tuesday May 26 2015, @04:28PM

            by VLM (445) Subscriber Badge on Tuesday May 26 2015, @04:28PM (#188113)

            The unix philosophy of small cooperative tools is actively opposed. Thus the giant monolith.

            In the old days of telnet servers and RS-232 connections I was the guy stuck writing "EXPECT" scripts to automate work.

            The modern solution is presenting some kind of REST-ish, standard-ish API. Being a standard, there are of course like 15 incompatible standards. But machine-to-machine automation is hardly impossible over the web. I've been stuck doing all sorts of SOAPy, WSDLy foolishness over the years.
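A sketch of that REST-ish machine-to-machine automation in Python, against a hypothetical device endpoint; the URL path and JSON field names are made up for illustration, not any real device's API:

```python
# Sketch: poll a (hypothetical) device status endpoint and pull out fields.
import json
from urllib import request

def parse_status(body: str) -> dict:
    """Extract the fields we care about from a JSON status response."""
    doc = json.loads(body)
    return {"state": doc.get("state"), "uptime_s": doc.get("uptime", 0)}

def fetch_status(base_url: str) -> dict:
    # "/api/v1/status" is an assumed path; real devices vary.
    with request.urlopen(f"{base_url}/api/v1/status") as resp:
        return parse_status(resp.read().decode())
```

Compare this to driving a serial console with expect scripts: the parsing is trivial, and any host with a network stack can do it.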

            • (Score: 2) by kaszz on Tuesday May 26 2015, @04:41PM

              by kaszz (4211) on Tuesday May 26 2015, @04:41PM (#188127) Journal

              Not impossible. Just very cumbersome.

              It's almost like... oh, this web interface is messy. Fix it? Nah. Slam OpenWrt or the like onto it and be done with it, using some script on the device.

        • (Score: 2) by q.kontinuum on Wednesday May 27 2015, @08:29AM

          by q.kontinuum (532) on Wednesday May 27 2015, @08:29AM (#188523) Journal

          Web doesn't necessarily imply public access.

          It usually means giving some cloud-service provider access to the documents. Big companies can run their own servers etc., but smaller businesses won't. If I found out my tax adviser, physician, priest, therapist, or other business partner was managing my personal data on Google Docs or another cloud service, I'd look for another one.

          It pretty much does for consumer stuff like "internet refrigerator" or "internet thermostat",

          For consumers it definitely means entrusting their data to some other companies, usually US-based. This is a no-go for me. The US made it pretty clear that, while it might eventually consider obeying privacy laws regarding its own citizens, foreigners don't have any such privileges.

          --
          Registered IRC nick on chat.soylentnews.org: qkontinuum
    • (Score: 3, Informative) by Phoenix666 on Tuesday May 26 2015, @02:25PM

      by Phoenix666 (552) on Tuesday May 26 2015, @02:25PM (#188033) Journal

      For me the problem with doing away with native apps in favor of cloud versions is that connectivity in the United States is crap. I have a business connection in NYC and it's still chancy. The incredible security and privacy nightmare of the NSA and its brethren criminal organizations aside, productivity would take a hit on a regular basis if I had to rely on the cloud for anything. Maybe it's a different story in parts of the world that have 1st World broadband, like South Korea, but the United States does not seem to be cloud-ready.

      --
      Washington DC delenda est.
      • (Score: 2) by kaszz on Tuesday May 26 2015, @04:46PM

        by kaszz (4211) on Tuesday May 26 2015, @04:46PM (#188130) Journal

        So USA is still a 1st world country? ;-)

      • (Score: 2) by VLM on Tuesday May 26 2015, @04:56PM

        by VLM (445) Subscriber Badge on Tuesday May 26 2015, @04:56PM (#188137)

        None of the browser driven apps at my employer are commercial public cloud, at least not that I can think of.

        A $250K machine with an embedded web interface (think of a stereotypical network printer, but running an engineering tool or production machine instead of a boring printer), OK.

        A box provided by the manufacturer that plugs into our network and supposedly should be treated as an appliance, although it's really just a Windows or Linux install with Apache and some support code. An example of this architecture (that we actually don't use) would be GitHub Enterprise. OK.

        A virtual image on the private vmware cluster and the private NAS farm. OK. I have like 20 of them doing various things. None of them have any public access. I don't even know where they're located today although I think they're in the midwest somewhere. They could be at the coastal centers again for all I know. It really doesn't matter.

    • (Score: 2) by novak on Tuesday May 26 2015, @11:33PM

      by novak (4683) on Tuesday May 26 2015, @11:33PM (#188349) Homepage

      Not really disagreeing with you, but adding to the list of areas where native applications are required: engineering. I worked in a place that did a fair bit of CFD simulation, as well as some mechanical and thermal FEA. For data processing on that level you really require as much memory as you can get; that's actually what determined how big a model you could load. Everyone had 12GB RAM on their desktop, but there were a few community boxes with 96GB to 128GB RAM for more serious processing. You would never want to add the overhead of a browser on top of that.

      The simulation clusters themselves also needed a lot of RAM, typically several GB per core, with more memory on the head node for partitioning. And the clusters ran Linux (Red Hat, unfortunately, but Red Hat 5 at least). Over the years I worked there we had only one Windows cluster, and it was a total nightmare: if the machines were not patched exactly in lockstep, the simulations would crash. Various Windows updates affected simulation performance significantly, often fatally, and the cluster generally failed to run above about half of what our Linux clusters did.

      So basically: yes, the desktop is dying, and where it lives on, Linux is much more fit to survive (except, of course, the legacy MS Office VBA plugins, may they perish in fire).

      --
      novak