Alienware 17 R4 2017 Gaming Laptop Review and more   
Here's a roundup of today's reviews and articles: Adata's Ultimate SU900 256GB SSD reviewed; Alienware 17 R4 2017 Gaming Laptop Review: Powerful And Refined; AMD's new Ryzen 3 could beat Intel's Core i3 processors; ECS Z270-Lightsaber Review; G.Skill Announces Quad-Channel DDR4-4200 Kit for Intel Skylake-X CPUs; Power Consumption & Thermal Testing With The Core i9 7900X On Linux; SteelSeries Arctis 5 Review; TP-Link TL-PA9020P AV2000 2-Port Gigabit Passthrough Powerline Adapter Kit Review...
          Hire a Linux Developer by aalesh28   
Embedded C-based project on a 32-bit Linux processor. Looking for someone who knows how ROP (return-oriented programming) and shellcode work. Not a major project; most of the work is done, just need help with a bit of troubleshooting... (Budget: $250 - $750 USD, Jobs: C Programming, Embedded Software, Linux, Software Development, Ubuntu)
          Wing FTP Server For Linux(64bit) 4.9.1   
Secure and powerful FTP server software for Windows, Linux, Mac OS X and Solaris
          Linux cp command tutorial for beginners (8 examples)   
If you are new to Linux, it's worth knowing that the command line is a very powerful tool, capable of doing almost all of the tasks you can do through the graphical interface. The Linux cp command gives you the power to copy files and directories from the command line. In this tutorial, we will discuss the basic usage of this tool using easy-to-understand examples.
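For reference, here are a few minimal cp invocations of the kind such a tutorial typically covers (all file and directory names below are hypothetical examples):

```shell
# Work in a scratch directory so nothing real is touched.
cd "$(mktemp -d)"
echo "hello" > notes.txt

# 1. Copy a file to a new name:
cp notes.txt backup.txt

# 2. Copy a file into a directory, keeping its name:
mkdir -p archive
cp notes.txt archive/

# 3. Copy a directory and all its contents recursively:
cp -r archive archive-copy

# 4. Preserve mode and timestamps while copying:
cp -p notes.txt backup-preserved.txt

# 5. Prompt before overwriting an existing file (interactive, so
#    shown here commented out):
# cp -i notes.txt backup.txt
```

Running `ls -R` afterwards shows the copies alongside the originals; cp never removes the source file, which is what distinguishes it from mv.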
          Vulnerability in systemd endangers many Linux distributions   


A Canonical developer has discovered a bug in systemd that allows an attacker to seize control of a device via DNS packets.
          IT Assistant - Brima Serviços - Belém, PA   
Formatting computers, installing operating systems (Windows/Linux), installing software, remote and on-site technical support....
From Indeed - Fri, 05 May 2017 11:22:32 GMT - View all jobs: Belém, PA
          Diplom-Ingenieur (Graduate Engineer) or Master of Science for the "Geophysics" Section   
The Alfred Wegener Institute Helmholtz Centre for Polar and Marine Research is a research institution supported by the Federal Republic of Germany, the Free Hanseatic City of Bremen and the states of Brandenburg and Schleswig-Holstein, with around 1,000 employees. Taking a broad multidisciplinary approach, we conduct polar and marine research and, together with numerous university and non-university research institutions, make an important contribution to global environmental, earth-system and palaeoclimate research. The AWI is one of the world's leading institutions continuously collecting multibeam sonar data in the Arctic and Antarctic. The existing extensive bathymetric database for both polar regions is to be continuously expanded and made available within international collaborations. Following a vacancy within the budget-funded establishment, the "Geosciences" division is seeking for the "Geophysics" section, starting immediately, a
Diplom-Ingenieur (graduate engineer)
or Master of Science
in Geomatics, Geodesy or Geoinformatics (preferably with a specialisation in Hydrography). Duties:
Carrying out bathymetric surveys with the multibeam sonar systems of the institute's large research vessels, with a focus on work in the polar regions aboard RV Polarstern. The role covers the entire workflow, from measurement through data processing and visualisation to long-term archiving of the data. In detail, the duties are: • Technical support of the multibeam sonar system on Polarstern • Planning and carrying out surveys, processing the data and producing working maps to support further scientific work on board • Close cooperation with marine geology, geophysics, biology and hydrography groups within the scientific programmes based at the AWI • Producing bathymetric maps from digital terrain models for further interpretation • Continuing the IT concept for acquiring and processing the sonar data on board and for processing and archiving the data at the institute • Maintaining the hardware and software of the associated computer systems on board and at the institute. Requirements:
Diplom or Master of Science in Geomatics, Geodesy or Geoinformatics (preferably with a specialisation in Hydrography) • Experience in carrying out hydrographic surveys • Good knowledge of the software packages Caris HIPS/SIPS, IVS Fledermaus and ESRI ArcGIS • Spoken and written English • Willingness and fitness to take part in ship expeditions lasting several months • Good knowledge of the MS Windows operating system and experience working with computers under a Unix or Linux operating system • Knowledge of at least one programming language as a basis for maintaining existing software and writing simple programs of your own. For further technical information, contact Mr Boris Dorschel (boris.dorschel@awi.de). The position is limited to two years, with the option of becoming permanent. Remuneration depends on your qualifications and the tasks assigned to you, up to pay grade 13 of the collective agreement for the German federal public service (TVöD-Bund). The place of work is Bremerhaven. We offer a multidisciplinary, international and exciting working environment with flexible working hours, state-of-the-art research equipment and first-class infrastructure. The AWI aims to increase the number of female employees with university degrees and therefore expressly encourages women to apply. Severely disabled applicants will be given preference if equally qualified. Various measures specifically promote the compatibility of work and family; thanks to our family-conscious personnel policy, we have been awarded the "Work and Family" audit certificate. Please send your application with the usual documents (CV, certificates and proof of employment; list of references), quoting reference 99/G/Geo-tt, by 15 August 2017 by e-mail (all documents combined in one PDF file) to: personal@awi.de.
          Functional Jobs: OCaml server-side developer at Ahrefs (Full-time)   

What we need

Ahrefs is looking for a backend developer with a deep understanding of networks, distributed systems, OS fundamentals, and a taste for simple and efficient architectural designs. Our backend is implemented mostly in OCaml with some C++, so proficiency in OCaml is very much appreciated; otherwise, a strong inclination to learn OCaml intensively in a short time will be required. An understanding of functional programming in general and/or experience with other FP languages (F#, Haskell, Scala, Scheme, etc.) will help a lot. Knowledge of C++ and/or Rust is a plus.

Every day the candidate will have to deal with:

  • 10+ petabytes of live data
  • OCaml
  • linux
  • git

The ideal candidate is expected to:

  • Independently deal with bugs, schedule tasks and investigate code
  • Make well-argued technical choices and take responsibility for them
  • Understand the whole technology stack at all levels: from network and userspace code to OS internals and hardware
  • Handle the full development cycle of a single component - i.e. formalize the task, write code and tests, set up and support production (devops), resolve user requests
  • Approach problems with a practical mindset and suppress perfectionism when time is a priority
  • Write flexible, maintainable code and adapt to post-launch requirement tweaks

These requirements stem naturally from our approach to development, with fast feedback cycles, highly focused personal areas of responsibility and a strong tendency toward vertical component splitting.

Who we are

Ahrefs runs an internet-scale bot that crawls the whole Web 24/7, storing huge volumes of information to be indexed and structured in a timely fashion. The backend is powered by a custom petabyte-scale distributed key-value store built to accommodate all that data coming in at high speed. The storage system is implemented in OCaml with a thin performance-critical low-level part in C++. On top of that, Ahrefs is building various analytical services for end users.

We are a small team and strongly believe in better technology leading to better solutions for real-world problems. We worship functional languages and static typing, extensively employ code generation and meta-programming, value code clarity and predictability, and are constantly seeking to automate repetitive tasks and eliminate boilerplate, guided by DRY and following KISS. If there is any new technology that will make our life easier - no doubt, we'll give it a try. We rely heavily on open-source code (as the only viable way to build a maintainable system) and contribute back; see e.g. https://github.com/ahrefs . It goes without saying that our team is all passionate and experienced OCaml programmers, ready to lend a hand and explain that intricate ocamlbuild rule or track down a CPU bug.

Our motto is "first do it, then do it right, then do it better".

What you get

We provide:

  • Competitive salary
  • Informal and thriving atmosphere
  • First-class workplace equipment (hardware, tools)
  • Medical insurance

Locations

Singapore : modern office in CBD

USA : cozy loft in San Francisco downtown

Get information on how to apply for this position.


          Exclusive: India presses Microsoft for Windows discount in wake of cyber attacks   

By Euan Rocha

MUMBAI (Reuters) - India is pressing Microsoft Corp to offer a sharply discounted one-time deal to the more than 50 million Windows users in the country so that they can upgrade to the latest Windows 10 operating system in the wake of ransomware attacks.

Microsoft officials in India have "in principle agreed" to the request, Gulshan Rai, India's cyber security coordinator, told Reuters over the phone on Friday.

A spokeswoman for Microsoft in India declined to comment on the matter. Officials at the company's headquarters in the United States and regional headquarters in Asia also declined to comment.

If Microsoft agreed to such a discount, it could open up the global software giant to similar requests from around the world. Rai said the government was in talks with Microsoft management in India. It is not immediately clear whether any other countries were seeking similar deals.

Rai said India began talks with Microsoft after the WannaCry ransomware attack last month, noting that both WannaCry and this week's attack, dubbed by some cyberexperts "NotPetya", exploited vulnerabilities in older iterations of the Windows OS.

"The quantum of the price cut, we expect some detail on in a couple of days," Rai said, adding the Indian government expected the company to offer the software at "throw-away prices."

"It will be a one-time upgrade offer to Windows 10 and it will be a discounted price for the entire country," said Rai, who was hand-picked by Indian Prime Minister Narendra Modi to be the country's first cyber security chief.

Rai declined to be more specific, but said he was confident that it would be "less than a quarter of the current price."

Rai, who has over two decades of experience in different IT areas including cyber security, said his team began coordinating with government agencies and regulators to push for OS upgrades soon after the WannaCry attack began on May 12.

The government's quick action helped minimize the impact of the NotPetya attack, which affected two of India's container port terminals, he said.

The government has also worked with banks to ensure that some 200,000 of the more than 240,000 ATMs in the country, most of which run on older Windows XP systems, have been upgraded with security patches released by Microsoft following the WannaCry attack, Rai said.

This is just an interim solution, however, said Rai, because although the patches fix vulnerabilities in older OS versions, they retain the limitations of those versions.

"New OS versions have different architecture, much improved architecture and much more resiliency," said Rai.

PRICE-SENSITIVE

Windows 10 Home currently retails for 7,999 rupees ($124) in India, while the Pro version of the software typically used by large companies and institutions costs 14,999 rupees ($232).

Roughly 96 percent of an estimated 57 million computers in India currently run on Windows, according to Counterpoint Research. Apple- and Linux-based systems account for the rest.

Given that only a small minority of Windows users in India already have Windows 10, Microsoft could be forgoing several billion dollars of potential revenue if it agreed to sell just the more widely used Home version of Windows 10 at a quarter of its current Indian retail price.

In the price-sensitive Indian market, people using computers in households or small businesses often do not upgrade their OS given the steep costs. The wide use of pirated Windows OS versions, which would not automatically receive security patches, exacerbates the vulnerabilities.

In light of the attacks, Rai said, the government "wants to incentivise the common man to upgrade their systems".

The WannaCry attack in May affected a state-run power firm in western India, while the NotPetya attack this week crippled operations at two port terminals in India operated by shipping giant AP Moller Maersk, which was affected globally.

($1 = 64.5175 Indian rupees)

(Additional reporting by Sankalp Phartiyal in Mumbai, Salvador Rodriguez in San Francisco and Jeremy Wagstaff in Singapore; Editing by Sonya Hepinstall)


          Canonical warns of a serious security hole in Linux   
Under certain circumstances it can be exploited to inject and execute malicious code; the flaw also enables denial-of-service attacks. It resides in the systemd background service. Ubuntu and Debian already provide patches.
          RE   
I have said before that RISC OS has found itself colliding with modern requirements, inviting a lot of comparison with Windows and other operating systems. I believe that this kind of comparison is a detriment to the platform. RISC OS isn't Windows or Linux or OS X, and it shouldn't try to be those things. Once you start trying to be Windows, you will fall short and feel that things are inadequate. I don't think that RISC OS is inadequate. It's a very nice operating system, good for embedded spaces and very low power consumption / cooling. It runs on ARM chips, available in many PDAs. I feel that if RISC OS tries to be Windows, it will die or stay in a perpetual state of inadequacy (Amiga). RISC OS should instead focus on doing what it does best; Castle and ROS could be making a mint off expensive embedded computing contracts in the industrial manufacturing industry. edit: PS. Good article; the author is honest but humble in his opinion. Windows/Mac/Linux users could learn something here. The author should have linked to the specific RISC OS browsers, though, instead of bypassing them for Firefox. For example, this article on Drobe compares and contrasts the main browsers for RISC OS: 2006-12-07 12:48
          ARX and other thoughts   
To me the biggest problem with RISC OS as a modern platform is its fragility. The lack of pre-emptive multitasking and decent memory protection are big problems. These problems IMHO limit its usefulness as an embedded or PDA OS. Indeed, IMHO RISC OS is probably a worse starting point for a PDA OS than Linux. The GUI is not at all suited to pen-based working, so it would need to be replaced. (PDA pens don't have buttons, so what do you do for menus?) What that leaves you with that is of use is the kernel, Filer, Font Manager and Draw Module. Whilst the Font Manager used to kick arse, it's now looking quite dated, and the Draw Module was always lacking.

What's sad is that Acorn were developing another OS for the Archimedes. It was called ARX, and was being developed at the Acorn Palo Alto Research Centre. It was to be a modern OS, with memory protection and pre-emptive multitasking like Unix, and a GUI similar to Mac OS - the guys working on it were experts in OS design. Unfortunately the project was poorly managed (as most Acorn projects were). Management decided to kill the project because the predicted finish date was long after the launch date for the Archimedes - Arthur was thrown together in a hurry, and the rest is history.

For those that don't know, Arthur was essentially developed by a bunch of BBC Micro games developers who had little experience in OS design. I believe that none of the folks who had been working on ARX worked on Arthur. It was designed to be compatible (to an extent) with the earlier BBC Micro OS - much of the early software on Archimedes machines was ports of BBC Micro apps. It was never really designed to be a serious OS.

IMHO what Acorn should have done was get in some decent management for the ARX project. Had they done that, they'd have ended up with a serious computer system and things may have turned out differently. They could potentially have competed in the spaces that Unix and Mac OS were dominating. 
Unfortunately Arthur meant they were only suited to the education and hobbyist market.
          RE[3]: ARX and other thoughts   
Linux is losing market share! Apple Users are homos! Windows XP crashes all the time! MSWord is a standard! Amiga users are in denial! OSnews is biased towards Apple/Ubunutu/Microsoft! Vista is just XP-SP3/a new skin! Ubuntu runs fine on 128MB! Firefox leaks memory! Novell are the new Microsoft! It's not open source, I'm not interested! EVERYTHING must be GPL! Will that do? :3
          RE[4]: ARX and other thoughts   
You forgot the part! where Linux has less than 0.000001% of server sales! And is declining! because it is written by malodorous commie twentysomethings! in their parents' basements! YEAR AFTER YEAR AFTER YEAR!
          RE[5]: ARX and other thoughts   
The only thing year after year after year around here is the year of the linux desktop! :3
          Is it just me...   
I would welcome an article that DID provide an opinion on some ways in which RISC OS could be improved to "drag ex-users back", but is it just me, or is the part where the author claims to write about that just about shared-sourcing the OS (which could otherwise be described as "look-but-don't-touch")? Never mind the fact that I don't think shared source is going to get anyone anywhere (the power of open source comes from the fact that anyone competent to modify it can do so) - even if I thought shared sourcing were viable, not even open-sourcing is a magical formula. Torvalds, Cox et al. can program till the cows come home, but if Linux had no momentum then people like HP wouldn't be moving to support Debian.
          RE[6]: ARX and other thoughts   
Buddy, we're fast approaching the tenth anniversary of the Year of My Linux Desktop. As for that other OS, or any other, it won't be the Year of My Other OS Desktop until I can get decent hardware support, stability, decent software support, and DRM-, bloat- and FUD-less operation, all in the same package. Open source would be nice, too.
          RE: Is it just me...   
Adding features to the OS will not "drag ex-users back". The real underlying problem is in the groups of people trying to create/support the OS. I dumped my Amiga because not only was it not improving, but they were going in the direction of expensive hardware that was not flexible in what it supported, plus the old 'add features till it breaks' approach that told me they were lost without a firm goal. Worse, there was not even a working base of code, but rather two different code bases being developed at the same time that were not compatible with each other. Presently I am a BeOS fan looking forward to Haiku. Progress has been slow, but it has been steady too. It is important to note that all the other BeOS clone projects that planned to out-BeOS BeOS collapsed under the weight of their ambitions, while the ports to Linux stalled on the amount of interfacing code needed. Haiku planned for a single, reachable goal and is still going strong. The main problems of Amiga, RISC OS and others do not seem to be in the nature of the OSes but rather in the nature of the people controlling the development.
          RE[8]: ARX and other thoughts   
The open source OpenOffice and Firefox packages (which are also available for Windows) don't have kernels, let alone ones you have to compile. And you don't have to compile the kernel on most distros anyway. It's FUD like that which prevents Linux users taking Windows users seriously. The sad part is that although my memory may be faulty on the point, I seem to remember that tomcat wasn't always so full of anti-FOSS/Linux crap.
          RE[9]: ARX and other thoughts   
OK, my memory may be faulty. Looking through his comment history I see that when I said "I seem to remember that tomcat wasn't always so full of anti-FOSS/Linux crap", I must have been thinking of someone else.
          RE[3]: How do I try it out?   
I believe there is one free one, but I don't think the emulation is that complete. And IIRC it's only really functional on Windows, so if you have Linux or a Mac, tough luck!
          RE[9]: ARX and other thoughts   
The irony is that I use Linux and OS X on a daily basis. But I'm a little tired of the constant mud-slinging by OSS fanatics (such as yourself) against anything which offends their ideological sensibilities (ie. Anything But Microsoft (ABM), anti-DRM, anti-patents, anti-trademarks, anti-hygiene, anti-cool, etc) -- so I'm merely providing another missing voice from the discussion. Don't like it? Too bad.
          What RISCOS needs...   
It needs a huge injection of cash, that's what. Anyone got £200 million going spare? It needs a 3GHz ARM chip to overcome the problem that ports like Firefox or OpenOffice run far too slow to be usable just now. If it had a fast ARM system with a modern 3D graphics chip and a hardware FPU, then it could just about survive. Once it has the right hardware for the level of demand that users would like, then software from Linux could be ported and still run at a decent rate. But as this isn't going to happen, why don't we just let this wonderful OAP of an OS live out the rest of its life in a Home and die gracefully? I for one will miss it.
          RE[11]: ARX and other thoughts   
Oh, and since when was Windows "cool"? Who cares about Windows? I use Linux, OS X, and Windows. As for "anything but Microsoft", you don't see me singing the praises of (e.g.) the HURD or Syllable. So it's not so much "anything but Microsoft"; rather "anything but pervasive crap". Oh, puh-lease. You're so transparent -- and your posts speak loud and clear on your anti-MS bias.
          RE[12]: ARX and other thoughts   
I didn't say I didn't have an anti-MS bias, I said I wouldn't necessarily use something just because it's not Microsoft. If you can't post coherent arguments without twisting my words or insulting me, then you needn't bother, because I won't take you seriously. There are plenty of things besides MS I wouldn't use either. But I do believe that my own mentioning of "Syllable" today is the first time I have been reminded of it in about a month. That certainly isn't the case where Windows is concerned, despite the fact that if I want it to be, it should be. And the fact that I have an anti-MS bias does not mean I am incapable of judging MS-related facts correctly. In fact it's quite possible that my ABILITY to do so is exactly what LEADS to my anti-MS bias. Finally, there's no reason why I shouldn't take advantage of my "anti-MS bias" to counter all the anti-Linux FUD around here.
          RE[4]: ARX and other thoughts   
you neglected to accuse all Linux users of being raving fanatical members of the "cult". notparker, you can keep quiet now, I said it for you.
          Comment on jb6 by Kristopher   
Skype has opened its web-based client beta to the world, after initially launching it mostly in the United States and U.K. earlier this month. Skype for Web now also supports Linux and Chromebook for instant messaging (no voice and video yet; those require a plug-in installation). The expanded beta adds support for a longer list of languages to help strengthen that global usability.
          Psychological thriller The Long Reach is coming to PS4, Xbox One, PC and Switch   
It draws on series such as Fargo and True Detective.

Painted Black Games, an independent studio based in Ukraine, has announced its psychological thriller with action and survival elements, The Long Reach, the team's debut video game. It will arrive in late 2017 on consoles (Xbox One, PS4 and Switch) as well as PC, Mac and Linux.




An adventure with puzzles, survival and many influences


The Long Reach, which began development as a mobile game before growing into something bigger, will tell the story of Control Steward, a scientist and researcher, caught up in a catastrophic event: people have gone mad. It takes us to Baervox, a fictional town in the interior of the United States. There, a scientific institute has developed a way to transfer, and thereby learn, all possible knowledge from one brain to another in mere seconds. Unfortunately, everything goes wrong. The technology spreads too fast and people lose their minds, plunging the entire population into chaos.


The developers say the game will feature adventure, stealth, action and multiple puzzle sections, giving the title as a whole an experience close to a survival game with a pixel-art style. Narrative content will also carry special weight, with more than 12,000 lines of dialogue in a pivotal script, and more than 20 main characters, objects and enemies with lives, pasts and personalities of their own.

The enemies, driven mad by a technological disease, will not be simple 'zombies': they will have believable stories, families and backgrounds

The Long Reach draws on franchises and games such as Resident Evil, as well as films and series of the calibre of The Matrix, Lone Survivor, True Detective and Fargo. It will arrive in late 2017 on consoles and PC.

          RE[16]: ARX and other thoughts   
We've already been through this. What you consider "praise" is always a tepid "Microsoft's approach exists ... but it's far worse than anything on Linux or OS X..." Which is to say, your "praise" isn't praise at all.
          RE[17]: ARX and other thoughts   
Whereas your MO of continually dissing Linux, even when to do so you must talk out of a part of your anatomy which is almost, but not quite, exactly your oral implement is of course totally fair and unbiased. Oh, wait. No it isn't.
          (IT) Security Analyst - SIEM tools   

Rate: Market Day Rate   Location: London   

Security Analyst required for a 10-week contract in Inner London to develop plans to safeguard computer files against accidental or unauthorised modification, destruction, or disclosure and to meet emergency data processing needs. Confer with users to discuss issues such as computer data access needs, security violations, and programming changes. Monitor current reports of computer viruses to determine when to update the virus protection system. Modify computer security files to incorporate new software, correct errors, or change individual access status. Coordinate implementation of the computer system plan with establishment personnel and outside vendors.

Experience required:
  • Experience working with Security Incident and Event Management (SIEM) tools
  • Significant experience of system operational security, network and/or application security
  • Technical knowledge in security engineering, system and network security (authentication and security protocols, cryptography), operation of a PKI, and application security
  • Knowledge of system security vulnerabilities and remediation techniques
  • Analytic skills to understand security implications of technical events
  • Extensive troubleshooting and research skills with a positive and proactive approach to customer service and getting things done
  • Strong experience working in an operational role in a secure environment
  • Knowledge of network and web-related protocols (e.g. TCP/IP, UDP, IPSEC, HTTP, HTTPS, routing protocols)
  • Strong scripting skills in at least one of the following is highly desirable: Ruby, Python, Shell (bash, ksh, csh)
  • Working knowledge of Linux

Clearance Level Required: BPSS (DS) - Baseline Personnel Security Standard (with a Disclosure Scotland). IR35 Scope: Outside. If deemed inside IR35 - please note that the client has determined that the off-payroll working rules will apply to this assignment and, where a worker elects to provide their services through an intermediary (such as a personal services company), income tax and primary national insurance contributions will be deducted at source from any payments made to the intermediary. If deemed outside IR35 - please note that the client has determined that the off-payroll working rules will not apply to this assignment. CV closing date is Friday 30th June at 3.00 pm. GSA Techsource Ltd operates as an Employment Agency when recruiting for permanent vacancies, and an Employment Business when recruiting for contract vacancies. All contract rates quoted are to Ltd companies.
 
Rate: Market Day Rate
Type: Contract
Location: London
Country: UK
Contact: Jackie Dean
Advertiser: GSA Techsource Ltd
Start Date: July 2017
Reference: JSJD/C/16866

          (IT) Security Developer - Cryptography   

Location: Northern Germany   

Security Developer - Cryptography. Task: work in German/Japanese projects with a focus on the development of security and safety functions for In-Vehicle Infotainment systems; requirements engineering, architecture and design of the corresponding components; implementation in the team, securing function and quality up to product maturity. Skills: very good degree in (technical) computer science or electrical engineering; experience in embedded Linux and cryptography; professional experience in security, especially 'Secure Linux', is necessary; very good C/C++ programming skills; an independent and goal-oriented working style; teamwork; very good knowledge of English; willingness to undertake international project work and travel (Japan, India)
 
Type: Contract
Location: Northern Germany
Country: Germany
Contact: Simon Gould
Advertiser: Jet Consulting
Email: Simon.Gould.BB38B.0A016@apps.jobserve.com
Start Date: ASAP
Reference: JSSG_SECDEV

          Software Engineer - Linux, Networking and Security Specialist (M/F) - Thales - Cholet   
WHAT WE CAN ACHIEVE TOGETHER: By joining us, you will be entrusted with the following missions: you will contribute to defining the development strategy for our products, the integration and test resources for "Equipment" and software development, and to writing the associated plans. You will define the design of the solution; code, develop or configure the newly identified software modules; and integrate them into the complete solution and...
          Comment on Is $1000 Enough to Start Trading? by Joan   
Skype has opened its web-based client beta to the entire world, after introducing it broadly in the U.S. and U.K. earlier this month. Skype for Web also now supports Linux and Chromebook for instant-messaging communication (no voice and video yet; those require a plug-in installation). The expansion of the beta adds support for a longer list of languages to help bolster that worldwide usability
          Useful Linux information and good reading for Linux users.   
Useful Linux e-books and further reading for Linux users.
Guide to the Linux command line (a useful guide for Linux newcomers): http://www.securitronlinux.com/lc/Linux-Command-Line-Guide%20.png.
Guide to Linux permissions (how to set the permissions for Linux files with the command line): http://www.securitronlinux.com/lc/112_sysadmin_permissions.pdf.
Debian Linux handbook (useful information and tips for Linux newcomers): http://www.securitronlinux.com/lc/debian-handbook.pdf.
Ubuntu …


           SYSTEMS AND NETWORK ADMINISTRATOR (M/F) - Alten - Toulouse   
Required knowledge: general knowledge of systems and hardware (physical or virtual); in-depth knowledge of AIX and Linux environments; general knowledge of SAN/NAS storage; general knowledge of AIX/VMware virtualization; knowledge of how monitoring tools work (Patrol, BEM, BMC Portal ...) as well as of security mechanisms and tools; knowledge of how backup tools work (Netbackup, Avamar); knowledge...
          Economist on "HFT bros, put down your latency arbs and explain this:"   

work hours are great, you learn about all the latest in chip design, fpgas, linux kernels, gpus; best technology; a ton of data to run ml models on; very low key (no clients); everything is free;

only downside is that it is a shrinking business, the best years were in the past


          Linux Plumbers Conference: Containers Microconference accepted into Linux Plumbers Conference   

Following on from the Containers Microconference last year, we’re pleased to announce there will be a follow on at Plumbers in Los Angeles this year.

The agenda for this year will focus on unsolved issues and other problem areas in the Linux kernel container interfaces, with the goal of allowing all container runtimes and orchestration systems to provide enhanced services.  Of particular interest is the unprivileged use of container APIs, which can be used both to enable self-containerising applications and to deprivilege (make more secure) container orchestration systems.  In addition we will be discussing the potential addition of new namespaces: LSM for per-container security modules; IMA for per-container integrity and appraisal; and file capabilities to allow setcap binaries to run within unprivileged containers.
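The namespaces mentioned above are visible from userspace as links under /proc/self/ns; two processes are in the same namespace exactly when their links match. A minimal, Linux-only Python sketch (the names listed are today's common namespaces, not the proposed LSM/IMA ones):

```python
import os

def current_namespaces(ns_names=("pid", "net", "mnt", "uts", "ipc", "user")):
    """Return a mapping of namespace name -> identifier for this process.

    On Linux, /proc/self/ns/* are symlinks like 'pid:[4026531836]'.
    On systems without /proc, this simply returns an empty mapping.
    """
    result = {}
    for name in ns_names:
        path = "/proc/self/ns/" + name
        if os.path.exists(path):
            result[name] = os.readlink(path)
    return result

if __name__ == "__main__":
    for name, ident in current_namespaces().items():
        print(name, ident)
```

Comparing these identifiers across two PIDs is how tools decide whether processes share, say, a network namespace.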

For more details on this, please see this microconference’s wiki page.

We hope to see you there!


          Senior Software Engineer - Cisco - San Jose, CA   
We Are Cisco. Strong background in Linux internals. In-depth understanding of virtualization technologies including Docker, KVM, VMware. Why Cisco? We connect...
From Cisco Systems - Mon, 05 Jun 2017 23:10:46 GMT - View all San Jose, CA jobs
          Popcorn Time   
The ultimate free program for streaming movies and series, Popcorn Time, also has a desktop client for Linux distributions
          linux-zen 4.11.8-1 x86_64   
The Linux-zen kernel and modules
          linux-zen-docs 4.11.8-1 x86_64   
Kernel hackers manual - HTML documentation that comes with the Linux-zen kernel
          linux-zen-headers 4.11.8-1 x86_64   
Header files and scripts for building modules for Linux-zen kernel
          linux-zen 4.11.8-1 i686   
The Linux-zen kernel and modules
          linux-zen-docs 4.11.8-1 i686   
Kernel hackers manual - HTML documentation that comes with the Linux-zen kernel
          linux-zen-headers 4.11.8-1 i686   
Header files and scripts for building modules for Linux-zen kernel
          Junior/Senior Drupal Software Developer - Acro Media Inc. - Okanagan, BC   
Some understanding of Linux hosting, if at all possible. Hopefully you have installed Linux on something at least once, even if it was just your Xbox or phone...
From Acro Media Inc. - Wed, 12 Apr 2017 12:28:59 GMT - View all Okanagan, BC jobs
          12265 Senior Programmer Analyst - CANADIAN NUCLEAR LABORATORIES (CNL) - Chalk River, ON   
Understanding of server technologies (Internet Information Server, Apache, WebLogic), operating systems (Windows 2008R2/2012, HP-UX, Linux) and server security....
From Indeed - Wed, 07 Jun 2017 17:50:30 GMT - View all Chalk River, ON jobs
          Machine Learning Developer - Intel - Toronto, ON   
6 (+) months experience with developing software in Linux and/or Windows,. Develop Machine Learning solutions in form of libraries and accelerators for FPGA, SW...
From Intel - Fri, 30 Jun 2017 10:25:27 GMT - View all Toronto, ON jobs
          Embedded Electrical Engineer - Littelfuse - Saskatchewan   
Real Time Operating System (RTOS) experience such as FreeRTOS, MQX, or Embedded Linux. If you are motivated to succeed and can see yourself in this role, please...
From Littelfuse - Thu, 15 Jun 2017 23:12:35 GMT - View all Saskatchewan jobs
          INFRASTRUCTURE SPECIALIST - A.M. Fredericks - Ontario   
3 – 5 years managing Linux servers. Ability to build and configure servers from the ground up, both Linux & Windows....
From A.M. Fredericks - Sat, 17 Jun 2017 05:07:14 GMT - View all Ontario jobs
          DEVOPS PROGRAMMER - A.M. Fredericks - Ontario   
Proficient in LINUX (Python/BASH scripting/CRON). Our company looking to automate core functionality of the business....
From A.M. Fredericks - Sat, 17 Jun 2017 05:07:08 GMT - View all Ontario jobs
          Software engineer - Stratoscale - Ontario   
Highly proficient in a Linux environment. Our teams are spread around the globe but still work closely together in a fast-paced agile environment....
From Stratoscale - Thu, 15 Jun 2017 16:47:38 GMT - View all Ontario jobs
          Solution Architects - Managed Services - OnX Enterprise Solutions - Ontario   
Working knowledge of both Windows and Linux operating systems. OnX is a privately held company that is growing internationally....
From OnX Enterprise Solutions - Mon, 12 Jun 2017 20:52:05 GMT - View all Ontario jobs
          IT Lead, Server Support - Toronto Hydro - Ontario   
Knowledge of server and storage systems such as Red Hat Enterprise Linux, Windows Server 2008 and 2012 Operating Systems, Active Directory, Virtual Desktop...
From Toronto Hydro - Mon, 12 Jun 2017 19:52:14 GMT - View all Ontario jobs
          Groovy, Ruby and JavaScript Developer Analyst.   
Important company requires: Groovy, Ruby and JavaScript Developer Analyst. Experience in Groovy, Ruby and JavaScript. Knowledge of the Linux OS, advanced Excel, the PostgreSQL database engine and stored procedures. Intermediate English. ......
          Senior EPM Solutions Architect - AstralTech - Ontario   
Solid understanding of Linux and Microsoft Windows Server operating system. Senior EPM Solutions Architect....
From AstralTech - Sat, 13 May 2017 08:40:54 GMT - View all Ontario jobs
          FULL-STACK JAVA DEVELOPER (M/F) - ON-X - Ontario   
Operating systems / virtualization (Linux, Windows, VMware, Docker). As a Full-Stack Developer, you will carry out the following missions:....
From ON-X - Thu, 06 Apr 2017 07:41:27 GMT - View all Ontario jobs
          Java Developer - Linux/Android (M/F), Technical Expertise - ON-X - Ontario   
Linux and Android. Your mission will be the development and design of Android applications....
From ON-X - Wed, 05 Apr 2017 07:39:29 GMT - View all Ontario jobs
          Java J2EE Developers - Technical Integrators (M/F), Technical Expertise - ON-X - Ontario   
Windows, Linux, VMWare. You will be in charge of developing innovative strong-authentication solutions....
From ON-X - Sun, 02 Apr 2017 07:26:34 GMT - View all Ontario jobs
          J2EE Developer (M/F), Technical Expertise - ON-X - Ontario   
Windows, Linux, VMWare. As part of our growth, we are looking for a Java J2EE Design and Development Engineer....
From ON-X - Sun, 02 Apr 2017 07:26:32 GMT - View all Ontario jobs
          Product Verification Engineer - Evertz - Ontario   
Proficient with Linux and high level programming or scripting languages such as Python. We are expanding our Product Verification team and looking for...
From Evertz - Fri, 24 Mar 2017 05:42:40 GMT - View all Ontario jobs
          Customer Success Engineer - Stratoscale - Ontario   
Extensive experience troubleshooting remote Linux system issues. As a Customer Success Engineer, you will be providing technical assistance to Stratoscale...
From Stratoscale - Thu, 09 Mar 2017 21:32:53 GMT - View all Ontario jobs
          Typology of the Indian Fan in the context of the FIFA World Cup.    
I don't really follow soccer*. So I don't know much about soccer. But I follow a lot of Indian soccer fans. I view them with mild amusement mixed with scientific curiosity. I study them. I try to find patterns in their bizarrely enthusiastic behavior. And I love doing pop-socio and pop-psych analysis of their behavior and their attitudes towards a game where India ranks even lower than countries smaller than my apartment building in Pune.

So I present here a typology of the Indian Fan in the context of the upcoming World Cup. The typology is arranged according to which country they support or bet on to lift the cup.

BRAZIL: This one is a no-brainer really, so let's get it out of the way. Everyone loves Brazil. It's a country that has won the cup the most often. They always have some of the best players, a style of play which is considered exciting, and wear really eye-catching yellow-and-green colors. So even someone with or without the most rudimentary knowledge of the game feels comfortable saying "Brazil of course!" when someone asks "Which team do you support?" It's like picking the Patriots at the beginning of the NFL season.

But it goes beyond just how good the team is on average (here comes the pop-socio and pop-psych). Brazil is "nice". Brazil is "safe". Other than soccer, what is Brazil associated with? Carnivals, pretty people, beaches, being part of the fashionable BRIC block, and again, carnivals. If countries were brands, Brazil would be like Linux - not really that relevant to your life but easy to love. It's the kind of country that if you visit as an Indian, you expect to love.

So might as well support them. Plus they are the hosts this time. Western media is being mean to them just like they were mean to India in the run-up to the 2010 Commonwealth Games. They have messy social inequality issues just like us. Yes, Brazil is safe to support.

BEST 2 EUROPEAN UNION TEAMS: At any given time, at least 2 of the 3 objectively best teams in the world are from the EU. So the self-proclaimed "knowledgeable" soccer fan from India will be telling anyone willing to listen that one of those two teams is SURE to win. One of the two is always ALWAYS Germany. And the other is some country whose economy Germany has bailed out or will be bailing out soon. Right now it's Spain. A decade ago, it was France. In between, Italy took a break from electing Caligula-esque Prime Ministers to occupy that spot.

So Germany and "Another EU country" are the best bets for Indian soccer pundits who want to set themselves apart from the bandwagon Brazil supporters and maintain a chance for gloating when the dust settles.

ARGENTINA: Ah, Argentina! The most bizarre underdog-favorite combination in the history of sport. I say this because some of my friends who support Argentina are genuinely convinced that Argentina is THE best team, regardless of the FIFA rankings. These verbose justifications start with "Messi is...." and then meander into incomprehensibility. Other friends supporting Argentina are steadfast about the team's underdog tag. "I always support an underdog, yaar!"

The media helps in whipping up support for Argentina too, given that Argentina is to FIFA what Notre Dame is to college football in America. Every damn year when college football starts, there will be pundits in the media saying "OH NOTRE DAME HAS A FINE FINE TEAM THIS YEAR!". Most years, they stay in the rankings for three weeks before dropping out. Once in a couple of decades, in the vein of a stopped clock being right twice a day, Notre Dame will indeed have a great season. And then the pundits preen. And Hollywood makes an atrociously weepy movie starring a hobbit. But I ramble.    

So it goes with Argentina. Call it the continuing halo effect of bad boy Maradona. Or maybe the current halo effect of some guy named Messi who's done diddly-squat in two World Cups, but apparently does great in domestic soccer matches in Europe.

HOLLAND: The Dutch team is for true-blue underdog supporters. Again, I don't know much about soccer. But from what I am told, this team has choked more times than the South African cricket team, the 1990s Buffalo Bills, and Ivan Lendl combined. Which makes them particularly alluring for people who just love supporting an underdog in the faint hope that they will be proven right. Last time, these fans were rewarded by having to wait as long as the finals to have their hearts broken.

But still, for these folks, it is HUP HOLLAND HUP. A friend of mine says that the bright orange jerseys appeal to the latent Hindutva tendencies in some Indian fans, but we'll put a pin in that for now.

ANOTHER EU TEAM: The previous four categories take care of 90% of Indian soccer fans. Which brings us to self-proclaimed "knowledgeable" fans of the game who don't like being lumped with the conventional wisdom. They need to cogently claim that a different underdog is actually going to take home the cup, but the heathen masses are too blinded by media tropes to see it. So they pick a team which is ranked somewhere from 4th to 8th in the FIFA rankings and which has a player they have watched in one of the domestic soccer leagues from Europe.

"Of course it'll be Portugal yaar! That Cristiano Ronaldo I tell you...."

For the last 3 World Cups, the top choice for these people has been Portugal, thanks to this Ronaldo fella. Never mind that he and his team have shown the poor judgment of associating themselves with the New York Jets to practice for the World Cup. That anyone can think that a guy who voluntarily decided to learn something from Rex Ryan has any chance of winning anything shows how little soccer fans know about real football. But I troll. And I digress.

When it's not Portugal, it is some other European country that yes, Germany will also have to bail out.  

ENGLAND: Don't ask me why. Seriously don't. I know very little about soccer but even I know enough to know that the chances of England winning the cup are only negligibly higher than an Indian winning the Olympic 100 meter gold. And yet a few Indians will be steadfast in their support of England.

One guy I knew used to base his support on the supposedly "under-appreciated talents" of David Beckham. This was before Beckham became known as the guy who sells underwear on giant hoardings in Times Square. These days, I suspect the support for England is driven by the fact that so many Indians spend so much money on low quality made-in-Bangladesh jerseys of teams from the EPL, that they feel obliged to go all the way.

But seriously Indian supporters of England soccer, what are the chances that an England cricket team will win an ICC title AND a notionally English guy will win the Wimbledon AND England will win the FIFA World Cup, all within 5 years?

USA: The only Indians who support the US are a) Indians who live in the US, and b) follow only cricket and/or American sports. Yes, this includes me, ok? The rest of the time, we are happy with our WillowTV subscription, our NFL fantasy football leagues, our March Madness brackets, our opinions on LeBron and the Heat, our love or hate for the Yankees or the Red Sox. We look at MLS ads and go "lulz". We see European soccer matches on our cable guide menu and go "WTF?".

But then once every four years, this damn World Cup thing comes along. And everyone is talking about it. Not just CNN, who will usually talk about the most vapid things. So what do we do?

USA! USA! USA!

We google furiously to find out who our players are. We try to figure out what the hell "Group of Death" means. We practice our pronunciation of Klinsmann. And we set a countdown clock to the start of the NFL season.

* "SOCCER? IT'S FOOTBALL BRO!!!" you say? Read this.  

          Docker 17.06 CE Debuts with Multi-Stage Container Builds   

At DockerCon 17 in April, Docker Inc made a series of large announcements, including a shift to a new model with the Moby Project to build the Docker Engine. Now in June, the first major release of Docker built with the Moby Project is available in the form of the Docker 17.06 Community Edition release.

The Moby Project is a refactoring of how Docker as a container platform is built, by breaking it down into a series of community-focused efforts that include LinuxKit and containerd, among others.

Read more


          OSS Leftovers   
  • AT&T Passive Optical Network Trial to Test Open Source Software

    AT&T is set to trial 10-gigabit symmetric passive optical network technology (XGS-PON), tapping its growing virtualization and software expertise to drive down the cost of next-generation PON deployments.

    The carrier said it plans to conduct the XGS-PON trial later this year as part of its plan to virtualize access functions within the last-mile network. Testing is expected to show support for multi-gigabit-per-second Internet speeds and allow for merging of services onto a single network. Services to be supported include broadband and backhaul of wired and 5G wireless services.

  • Intel Begins Prepping Cannonlake Support For Coreboot

    The initial commit this morning is mostly structuring for the Cannonlake SoC enablement and some boilerplate work, while more of the enablement is still in progress. Beyond that, the UART initialization has landed so far, with more Cannonlake code still to come -- this is the first post-Kabylake code in Coreboot.

  • Windstream Formally Embraces Open Source

    Windstream is dipping its toe into the open source waters, joining the Open Network Automation Project (ONAP), its first active engagement in open source. (See Windstream Joins ONAP.)

    The announcement could be the sign of broader engagement by US service providers in the open source effort. At Light Reading's Big Communications Event in May, a CenturyLink speaker said his company is also looking closely at ONAP. (See Beyond MANO: The Long Walk to Network Automation.)

    Windstream has been informally monitoring multiple open source efforts and supporting the concept of open source for some time now, says Jeff Brown, director of product management and product marketing at Windstream. The move to more actively engage in orchestration through ONAP was driven by the growing influence of Windstream's IT department in its transition to software-defined networking, he notes.

  • Why Do Open Source Projects Fork?

    Open source software (OSS) projects start with the intention of creating technology that can be used for the greater good of the technical, or global, community. As a project grows and matures, it can reach a point where the goals of or perspectives on the project diverge. At times like this, project participants start thinking about a fork.

    Forking an OSS project often begins as an altruistic endeavor, where members of a community seek out a different path to improve upon the project. But the irony of it is that forking is kind of like the OSS equivalent of the third rail in the subway: You really don’t want to touch it if you can help it.

  • Mozilla Employee Denied Entry to the United States [iophk: "says a lot about new Sweden"]

    Daniel Stenberg, an employee at Mozilla and the author of the command-line tool curl, was not allowed to board his flight to the meeting from Sweden—despite the fact that he’d previously obtained a visa waiver allowing him to travel to the US.

    Stenberg was unable to check in for his flight, and was notified at the airport ticket counter that his entry to the US had been denied.

  • Print your own aquaponics garden with this open source urban farming system

    Aquapioneers has developed what it calls the world's first open source aquaponics kit in a bid to reconnect urban dwellers with the production of their food.

  • The story behind Kiwix, an offline content provider

    Kiwix powers offline Wikipedia and provides other content to users, like all of the Stack Exchange websites.

  • Systemd flaw leaves many Linux distros open to attack

          Events: openSUSE.Asia Summit 2017, Diversity Empowerment Summit North America, Technoshamanism in Aarhus, ELC Europe   
  • openSUSE.Asia Summit 2017 Tokyo, Japan

    It is our great pleasure to announce that openSUSE.Asia Summit 2017 will take place at the University of Electro Communications, Tokyo, Japan on October 21 and 22.

    openSUSE.Asia Summit is one of the great events for the openSUSE community (i.e., both contributors and users) in Asia. Those who usually communicate online can get together from all over the world, talk face to face, and have fun. Members of the community will share their most recent knowledge and experiences, and learn FLOSS technologies surrounding openSUSE.

    This event in Tokyo is the fourth openSUSE.Asia Summit. Following the first Asia Summit in Beijing in 2014, the Asia Summit has been held annually: the second summit was in Taipei, Taiwan, and the third in Yogyakarta, Indonesia, last year. The past Asia Summits have had participants from China, Taiwan, India, Indonesia, Japan, and Germany.

  • Program Announced for The Linux Foundation Diversity Empowerment Summit North America

    The Linux Foundation, the nonprofit organization enabling mass innovation through open source, today announced the program for the Diversity Empowerment Summit North America, taking place September 14 in Los Angeles, CA as part of Open Source Summit North America. The goal of the summit is to help promote and facilitate an increase in diversity, inclusion, empowerment and social innovation in the open source community, and to provide a venue for discussion and collaboration.

  • Technoshamanism in Aarhus – rethink ancestrality and technology
  • Last Chance to Submit Your Talk for Open Source Summit and ELC Europe

          Blender – The Perfect 3D Creation Tool for Linux   

At this point in time, it can be argued that Blender needs no formal introduction, but for the sake of Linux users who are new to the community, I will introduce it anyway.

Blender is a 3D creation suite with support for the entire 3D pipeline – modeling, rigging, animation, simulation, rendering, compositing and motion tracking, video editing, and game creation.

As a free and open-source project, its code is contributed by hundreds of professionals, hobbyists, students, VFX experts, animators, scientists, and studios from all over the world.

It has become such a technological sensation that it is being used for various TV shows, short films, advertisements, and feature films now.

Read more


          Comment on Notes on mini-printers and Linux by scruss   
To use with a ______? Options 1, 2 & 3 will work with an Arduino (5V), or Raspberry Pi / 3V3 microcontroller <em>if</em> you can get the right power supply. 24 V is mostly for truck use, 12 V might be a little easier to get and still prints quickly, and the 5-9 V options are very easy to get but often drive the printer pretty slowly. Option 4 you likely don't want at all. Real RS232 levels are a bit rare these days. Option 5 would work nicely with a Raspberry Pi, but difficult with a microcontroller.
          Comment on Notes on mini-printers and Linux by Eric   
For the DP-EH600 which version do I need? "Option 1 24VDC,Serial(TTL) interface Option 2 12VDC,Serial(TTL) interface Option 3 5-9VDC,Serial(TTL) interface Option 4 5-9VDC,Serial(RS232)interface Option 5 5-9VDC,USB interface"
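Printers like these TTL-serial models typically accept an ESC/POS-style byte stream; assuming the DP-EH600 does too (an assumption; check its datasheet), the payload you would write to the serial port (e.g. via pyserial at the printer's baud rate) can be built like this:

```python
def escpos_print_payload(text):
    """Build a minimal ESC/POS byte sequence: initialize, print text, feed.

    This only constructs the bytes; actually sending them to the printer
    (serial port path and baud rate) depends on which hardware option you have.
    """
    ESC = b"\x1b"
    payload = ESC + b"@"                        # ESC @ : initialize printer
    payload += text.encode("ascii", "replace")  # printable text
    payload += b"\n"                            # line feed advances the paper
    return payload

if __name__ == "__main__":
    print(escpos_print_payload("Hello printer"))
```

With a TTL or USB option you would then write this payload to the device, for instance with pyserial's `Serial(...).write(...)`; the RS232 option additionally needs a level shifter, as the comment above notes.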
           John Toland's books: Occupation; The Rising Sun; Adolf Hitler: The Definitive Biography; etc.   






 We examine Japan because, over the past 50 years, both Taiwan and Japan have been manufacturing-led economies; the half century of learning about SQC, TQC and the like that we discuss is really our own story. We ran into the same blessings and predicaments, and how to "turn peril into safety" and open a new chapter is our point of focus.

...Japan after the Second World War is an excellent example. Abolishing the military, turning the emperor into a symbol, weakening police power, dissolving the zaibatsu, land reform that made small landlords into owner-farmers, encouragement of labor unions, broadened political participation, and the purge of politicians, bureaucrats, entrepreneurs and commentators who had assisted the war effort all became the main reforms... In this sense, international relations fundamentally define the relation between state and society." (Inoguchi Takashi, State and Society, trans. Liu Li-er, Taipei: China Times Publishing, 1992, pp. 85-6.)
To understand the Japan of 1945-1949 (the US military drafting a constitution for Japan, making it liberal (one village teacher's gloss: "autonomous" and "acting on one's own") and democratic; the military tribunals; the deep feelings, loves and hates of ordinary families; America's decision to develop Japan into Asia's "workshop headquarters", which bound the zaibatsu enterprises to capitalism; and other important social background), see the novel: John Toland, Occupation, Beijing: China Social Sciences Press (1987, 1997).
For a basic understanding of Japan's macroeconomic development in this period, see:
Nakamura Takafusa (ed.), Economic History of Japan, vol. 7: "Planning" and "Democratization", Beijing: SDX Joint Publishing, 1997.
Yasuba Yasukichi et al. (eds.), Economic History of Japan, vol. 8: High-Speed Growth, Beijing: SDX Joint Publishing, 1997.



John Willard Toland (June 29, 1912 – January 4, 2004) was an American writer and historian. He is best known for a biography of Adolf Hitler and a Pulitzer Prize-winning history of World War II-era Japan, The Rising Sun.




-----


----

Pulitzer Prize-winning historian John Willard Toland was born in La Crosse, Wisconsin on this day in 1912.
"A hybrid of Prometheus and Lucifer.”
--from ADOLF HITLER: THE DEFINITIVE BIOGRAPHY by John Toland
Toland's classic, definitive biography of Adolf Hitler remains the most thorough, readable, accessible, and, as much as possible, objective account of the life of a man whose evil effect on the world in the twentieth century will always be felt. Toland’s research provided one of the final opportunities for a historian to conduct personal interviews with over two hundred individuals intimately associated with Hitler. At a certain distance yet still with access to many of the people who enabled and who opposed the führer and his Third Reich, Toland strove to treat this life as if Hitler lived and died a hundred years before instead of within his own memory.





"He was learning how to appeal to the basic needs of the average German ... His 'basic values and aims' were as reassuring as they were acceptable. His listeners could not possibly know that the 'reasonable' words were a mask for one of the most radical programs in the history of mankind, a program that would alter the map of Europe and affect the lives, in one way or another, of most of the people on Earth."
John Toland, Adolf Hitler
------




The Rising Sun
Book by John Toland



The Rising Sun: The Decline and Fall of the Japanese Empire, 1936–1945, written by John Toland, was published by Random House in 1970 and won the 1971 Pulitzer Prize for General Non-Fiction. It was republished by Random House in 2003. (Wikipedia)
Genre: Non-fiction



The original title is The Rising Sun. The mainland Chinese edition is titled The Decline and Fall of the Great Japanese Empire; Taiwan republished it as Empire's Setting Sun.
It can be downloaded online completely free.
John Toland's book, which benefited from the assistance of his wife Toshiko Matsumura, won the Pulitzer Prize.

          Terminus is modern, highly configurable terminal app for Windows, Mac and Linux   

Hands up if you use GNOME Terminal as your default terminal on Ubuntu? That’s a lot of hands. GNOME Terminal is great. It’s fast, full-featured, and straightforward. But it doesn’t hurt to try a few alternatives to it from time to time. Be it the vintage chic of retro term or the modern minimalism of Hyper. Today we […]

This post, Terminus is modern, highly configurable terminal app for Windows, Mac and Linux, was written by Joey Sneddon and first appeared on OMG! Ubuntu!.


          PhockUp is a Clever CLI Tool To Organize Photos by Date   

Phockup is a simple, straightforward, command line tool for sorting photos into folders based on date. It's an ideal tool for making organized backups.

This post, PhockUp is a Clever CLI Tool To Organize Photos by Date, was written by Joey Sneddon and first appeared on OMG! Ubuntu!.
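The sorting scheme described (date-based folder buckets) can be sketched in a few lines of Python; as a simplifying assumption, this illustration uses file modification time rather than Phockup's actual EXIF-based date handling:

```python
import os
import shutil
from datetime import datetime

def sort_by_date(src_dir, dest_dir):
    """Move files from src_dir into dest_dir/YYYY/MM/DD buckets.

    A sketch of the idea only: the date comes from the filesystem
    mtime here (an assumption), not from photo EXIF metadata.
    """
    for name in os.listdir(src_dir):
        src = os.path.join(src_dir, name)
        if not os.path.isfile(src):
            continue  # skip subdirectories
        stamp = datetime.fromtimestamp(os.path.getmtime(src))
        bucket = os.path.join(dest_dir, stamp.strftime("%Y"),
                              stamp.strftime("%m"), stamp.strftime("%d"))
        os.makedirs(bucket, exist_ok=True)
        shutil.move(src, os.path.join(bucket, name))
```

Running `sort_by_date("~/inbox", "~/photos")` (with expanded paths) would leave each file under a path like `photos/2017/06/30/photo.jpg`.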


          Re: Configure HTTPS Tomcat to Remedy   

Discussion successfully moved from Developer Community to Remedy AR System

 

Did you modify the "server.xml" from Tomcat? Do you have an Apache httpd or IIS server or is Tomcat "alone"?

Is Tomcat on Windows or Linux?
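For reference, HTTPS in Tomcat is normally enabled by adding an SSL connector to conf/server.xml; a minimal JSSE-style sketch for the Tomcat versions of that era (the port, keystore path and password below are placeholders, not values from this thread):

```xml
<!-- HTTPS connector sketch for Tomcat's conf/server.xml.
     keystoreFile, keystorePass and port are placeholder assumptions. -->
<Connector port="8443"
           protocol="org.apache.coyote.http11.Http11NioProtocol"
           maxThreads="150" SSLEnabled="true"
           scheme="https" secure="true"
           clientAuth="false" sslProtocol="TLS"
           keystoreFile="/path/to/keystore.jks"
           keystorePass="changeit" />
```

If Apache httpd or IIS sits in front of Tomcat, TLS is usually terminated there instead, which is why the questions above matter.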


          Instalacia softweru pod Debian. Rozbehanie internetu a tlaciarne.   
On a computer after a reinstall of Debian Linux, I need software installed and the internet and a printer set up. The computer is in BA, Ruzinov. If ...
          Comment on Nvidia updates its Vulkan drivers for Linux by piranin   
Good news for people with Nvidia, although it doesn't matter much to me because I have all AMD (a Ryzen 5 and a 580), and I hope they also do like Nvidia and properly update their Linux drivers, because compared to Nvidia they are still quite a way behind
          Comment on Linux Lite 3.4 is now available for low-resource computers by Sento   
I just installed the distro, and there is no way to get Spanish to stay installed. I went to Settings and selected the Spanish language. Rebooted and everything is still in English, even after I deleted the English language.
          Comment on The best Linux distributions of 2017 by Fede   
Deepin, a secure distro? Coming from a communist country that spies on and controls its inhabitants as much as it can, sending those who voice contrary opinions to prison and trafficking in their organs? I don't believe it; I installed it to try it out, but then removed it. And to think people believe it is a free OS, without analyzing the context (the country) in which it is developed. I don't mean to get into political debates, but this question of GNU/Linux freedom does not square with China's politics http://cnnespanol.cnn.com/2016/06/23/reporte-china-continua-con-la-extraccion-de-organos-a-presos-a-escala-masiva/
          Comment on LibreOffice 4.0 Stable Version Available for Download by Patrick Koppenburg   
For 32-bit it is: cd LibreOffice_4.0.0.3_Linux_x86_deb/DEBS/ and not cd LibreOffice_4.0.0.3_Linux_x86-64_deb/DEBS/
          Comment on How to make a multiboot USB with RMPrepUSB, Easy2Boot and Yumi Linux by furuikisui   
In Yumi there are 4 options to install Windows; which one should I use to have several systems?
          From the Just Getting Through Last weeks Stuff Dept: Microsoft / Novell Deal : IT wins due to InterOp!!!!! YAY!   

Originally posted on: http://teamfoundation.net/archive/2006/11/13/96997.aspx

Wow. I get buried for a week and get transported into a parallel universe. Microsoft and Novell make a historic agreement. And while some folks in the open source community aren't happy, it seems most (including me) think this is a pretty good deal for building software in general... I mean, just being able to do these three things:
  • "...Microsoft and Novell will jointly develop a compelling virtualization offering for Linux and Windows..."
- Right on!
  • "...make it easier for customers to federate Microsoft Active Directory with Novell eDirectory"
- This has been a pain...
  • "...will take steps to make translators available to improve interoperability between Open XML and OpenDocument formats"
Nice! Guys, in the words of the great philosopher and sage Rodney King: "Can't we all just get along?"... I think that this agreement moves all of us who build software one step closer to doing just that.
          LinuxWorld Expo Site (and MacWorld Expo too) - Powered by Windows Server 2003...    

Originally posted on: http://teamfoundation.net/archive/2004/08/04/9326.aspx

Recently posted on Netcraft ... See for yourself.  BWAHAHAHAHAHA

'nuff said... ;)


          Comment on Model/Actress/Writer Natalia Bonifacci Interviews Herself by Alva   
Skype has opened its web-based client beta to the world, after launching it widely in the U.S. and U.K. earlier this month. Skype for Web also now supports Linux and Chromebooks for instant-messaging communication (no voice and video yet; those require a plug-in installation). The expansion of the beta adds support for a longer list of languages to help strengthen its international reach.
          PrivatBank offers the state secure systems based on PrivatLinux   
PrivatBank is ready to provide government bodies, state enterprises and local authorities with a ready-made corporate operating system solution based on PrivatLinux in the shortest possible time.
          Building A Linux Media Extender for your Media Center with GeeXbox   


I recently took up a challenge to build a media center PC for a friend. Having scoured eBay, I found what I thought looked like a really nice case for not too much money. It was also a barebones system, coming with a DVD drive, floppy drive and motherboard. The motherboard only supported a Pentium III up to 800 MHz, but I decided I could always upgrade it.

When I received it, the case lived up to expectations and was pretty small. So small, in fact, that on opening it up I found my upgrade options were limited. The case comes with a BookPC BKI810 3.3 motherboard (really tiny, and it would make a great basis for a car PC). This is a custom BabyATX design based on the Intel 810 chipset and supports Pentium III processors up to 800 MHz with a 100 MHz front-side bus. There are no PCI expansion ports, but it does have on-board video (with TV-out), audio (with S/PDIF out), Ethernet, modem, COM, game, printer and USB (1.0) ports.

Whilst this would have run MCE 2005, it wasn't going to cut the mustard for my friend's hi-def Vista Media Center, so I went to plan B for that (more on that in a later post). I guess I could have upgraded the BookPC with a Pico-ITX motherboard, but they are a bit too pricey.

All was not lost with this case, however; I had been meaning to look into the prospect of building a media extender to work with the uPNP capabilities of Windows Media Player 11 in Vista.

Note I say media extender, not Media Center Extender. I wasn't intending to stream live TV or even watch recorded TV in its native DVR-MS format, but I already transcode my recorded TV to WMV files, and Media Player 11 can share them via its built-in uPNP media server.

uPNP stands for Universal Plug and Play, a set of standards that lets network devices talk to each other. Windows Media Player 11 implements what used to be called Windows Media Connect and is a uPNP AV server. Basically, this can be contacted by any uPNP AV client to access any music, pictures or video in your Media Player library.
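For the curious, the discovery half of uPNP is simple enough to sketch in a few lines of Python. The sketch below shows SSDP, the multicast handshake a uPNP client uses to find AV servers like Media Player 11 on the LAN; it is only an illustration of the protocol, not GeeXboX's actual implementation.

```python
import socket

# Standard SSDP multicast address and port used by all uPNP devices.
SSDP_ADDR, SSDP_PORT = "239.255.255.250", 1900

# An M-SEARCH message asks every uPNP MediaServer on the LAN to identify itself.
MSEARCH = "\r\n".join([
    "M-SEARCH * HTTP/1.1",
    "HOST: {}:{}".format(SSDP_ADDR, SSDP_PORT),
    'MAN: "ssdp:discover"',
    "MX: 2",  # seconds a device may wait before replying
    "ST: urn:schemas-upnp-org:device:MediaServer:1",
    "", "",   # blank line terminates the HTTP-style message
])

def discover(timeout=2.0):
    """Broadcast an M-SEARCH and return (ip, raw_response) pairs received."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    sock.sendto(MSEARCH.encode("ascii"), (SSDP_ADDR, SSDP_PORT))
    responses = []
    try:
        while True:
            data, addr = sock.recvfrom(65507)
            responses.append((addr[0], data.decode("ascii", "replace")))
    except socket.timeout:
        pass  # no more replies within the timeout window
    finally:
        sock.close()
    return responses
```

Each response contains a LOCATION header pointing at the server's device description XML, which is where a client like GeeXboX goes next to browse the media library.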

I added an 800 MHz processor, heatsink, fan, 512 MB of RAM and a 20 GB hard drive to the motherboard. I checked it all out and it booted to the BIOS fine. I got a bit of a shock when my 800 MHz processor was shown as 600 MHz, but then realised that the processor had a 133 MHz FSB, so on a 100 MHz FSB board it scaled down to 600 MHz. Not a problem: it should be plenty fast enough for what I wanted.

So all I needed was a uPNP client. I also wanted something that could use the built-in DVD drive to play audio CDs and DVDs.

I decided to keep costs down and experiment with using Linux. I had heard of MythTV, so I tried that first. To cut things short, I'm not a Linux guru and I struggled with Myth. I first tried KnoppMyth, which I just couldn't get to play CDs or DVDs. I then found MythDora, which was a much more straightforward install and did play CDs and DVDs, but I couldn't work out how to configure it for uPNP, if that was even possible.

I was about to give up on the Linux route when I made a great discovery: GeeXboX. This is a Live CD (basically Linux that boots from a CD and doesn't need a hard disk) designed for playing media, and it has a uPNP client.





I downloaded the CD image (it comes in .iso format) and burnt it to CD. I put it in the HTPC's DVD drive, booted and, after a few seconds of Linux boot messages, up popped the menu, including an Open option. Selecting this gave me a uPNP option, and selecting that showed my media center as a uPNP server. I was stuck here for a few seconds as it wouldn't display a list of contents from my media center, until I realised, stupidly, that I hadn't gone to my media center PC and allowed the new device access. (In Vista this is really easy, as a toolbox popup appears on the PC as soon as a new device is detected.)

Hey presto, I had access to all the music, photos and videos on my media center. I chose a video (which happened to be a DivX AVI) and it played instantly and smoothly. This was great, but alas my next choice, a WMV file, failed to play.

"Codecs!" I thought, as one does. (Quick aside: in the Media Center world, I wonder if the word "codecs" should now be added to swear-word filters.)

Scouring the excellent GeeXboX web site, I found I was right: the WMV codecs were not included by default, and I would have to build my own custom ISO distribution. That sounded horrible; I really didn't want to get into Linux toolchains. Fortunately, GeeXboX has that covered and supplies a very user-friendly ISO builder. It even went as far as downloading the codecs for me. Excellent. One more button press and it built me a new ISO, and a quick burn later I had a functioning GeeXboX streaming video from my media center. It even managed a WMV HD file, albeit a bit broken up.

Streaming audio was just as easy, and my new custom build also gave me Shoutcast radio. To top things off, DVDs play well, as does CD audio (although I do have a problem with a couple stuttering on the first track).

So, in summary, for around 100 pounds you can pick up all the components for a decent media-streaming extender, and if you don't fancy building it yourself, the kit I used here, with the configured GeeXboX disc, will be up on eBay shortly. I'll post the link here.

EDIT: It's up on eBay CLICK HERE
          RevisionFX Collection June 2017 Win/Mac/Lnx   

Includes:
RevisionFX.FieldsKit.v3.4.2-AMPED
RevisionFX.REFill.v2.2.2-AMPED
RevisionFX.Shade.Shape.v4.2.3-AMPED
RevisionFX.SmoothKit.v3.3.4-AMPED
RevisionFX.ReelSmart.Motion.Blur.for.FX.v5.1.5-AMPED
RevisionFX.ReelSmart.Motion.Blur.for.OFX.v5.2.7-AMPED
RevisionFX.Twixtor.for.OFX.v6.2.8-AMPED
RevisionFX.DENoise.for.OFX.v3.0.9-AMPED
RevisionFX.DENoise.for.OFX.v3.0.9.LINUX-AMPED
RevisionFX.ReelSmart.Motion.Blur.for.OFX.v5.2.7.LINUX-AMPED
RevisionFX.Twixtor.for.OFX.v6.2.8.LINUX-AMPED
RevisionFX.DENoise.for.OFX.v3.0.9.MACOSX-AMPED
RevisionFX.FieldsKit.v3.4.2.MACOSX-AMPED
RevisionFX.ReelSmart.Motion.Blur.for.OFX.v5.2.7.MACOSX-AMPED
RevisionFX.REFill.v2.2.2.MACOSX-AMPED
RevisionFX.Shade.Shape.v4.2.3.MACOSX-AMPED
RevisionFX.SmoothKit.v3.3.4.MACOSX-AMPED
RevisionFX.Twixtor.for.OFX.v6.2.8.MACOSX-AMPED

Download Links:
RevisionFX.DENoise.for.OFX.v3.0.9-AMPED.rar
RevisionFX.DENoise.for.OFX.v3.0.9.LINUX-AMPED.rar
RevisionFX.DENoise.for.OFX.v3.0.9.MACOSX-AMPED.rar
RevisionFX.FieldsKit.v3.4.2-AMPED.rar
RevisionFX.FieldsKit.v3.4.2.MACOSX-AMPED.rar
RevisionFX.ReelSmart.Motion.Blur.for.FX.v5.1.5-AMPED.rar
RevisionFX.ReelSmart.Motion.Blur.for.OFX.v5.2.7-AMPED.rar
RevisionFX.ReelSmart.Motion.Blur.for.OFX.v5.2.7.LINUX-AMPED.rar
RevisionFX.ReelSmart.Motion.Blur.for.OFX.v5.2.7.MACOSX-AMPED.rar
RevisionFX.REFill.v2.2.2-AMPED.rar
RevisionFX.REFill.v2.2.2.MACOSX-AMPED.rar
RevisionFX.Shade.Shape.v4.2.3-AMPED.rar
RevisionFX.Shade.Shape.v4.2.3.MACOSX-AMPED.rar
RevisionFX.SmoothKit.v3.3.4-AMPED.rar
RevisionFX.SmoothKit.v3.3.4.MACOSX-AMPED.rar
RevisionFX.Twixtor.for.OFX.v6.2.8-AMPED.rar
RevisionFX.Twixtor.for.OFX.v6.2.8.LINUX-AMPED.rar
RevisionFX.Twixtor.for.OFX.v6.2.8.MACOSX-AMPED.rar

Mirror:
http://nitroflare.com/view/A02D8417895AB06/RevisionFX.DENoise.for.OFX.v3.0.9-AMPED.rar
http://nitroflare.com/view/F6AFCCAB5F56F1C/RevisionFX.DENoise.for.OFX.v3.0.9.LINUX-AMPED.rar
http://nitroflare.com/view/B83394116C8BB5F/RevisionFX.DENoise.for.OFX.v3.0.9.MACOSX-AMPED.rar
http://nitroflare.com/view/B0C79C2A9442AA4/RevisionFX.FieldsKit.v3.4.2-AMPED.rar
http://nitroflare.com/view/33131AFCFFB62D1/RevisionFX.FieldsKit.v3.4.2.MACOSX-AMPED.rar
http://nitroflare.com/view/150863F57C6C585/RevisionFX.ReelSmart.Motion.Blur.for.FX.v5.1.5-AMPED.rar
http://nitroflare.com/view/A86CFBF287B7CEC/RevisionFX.ReelSmart.Motion.Blur.for.OFX.v5.2.7-AMPED.rar
http://nitroflare.com/view/98FF46549BACC5E/RevisionFX.ReelSmart.Motion.Blur.for.OFX.v5.2.7.LINUX-AMPED.rar
http://nitroflare.com/view/B6CD5E9DB9DA2CC/RevisionFX.ReelSmart.Motion.Blur.for.OFX.v5.2.7.MACOSX-AMPED.rar
http://nitroflare.com/view/4B8BE9F0461E4FB/RevisionFX.REFill.v2.2.2-AMPED.rar
[…]

The post RevisionFX Collection June 2017 Win/Mac/Lnx appeared first on GFXDomain Blog.


          ProtonVPN: we tried out an anonymous and fast VPN   
As internet freedom is curtailed, the term VPN comes up more and more often: services that can protect you from snoopers. The new offering from the authors of ProtonMail turned out well, despite weaker Linux support. There are plenty of VPN services on the market. They protect your internet traffic from prying eyes, in particular from sometimes dubious ISPs. They also let you bypass geo-blocking of various content, and much more. The problem with these VPN services is that no matter how modern the technology or how strong the encryption, it never works without trust: you have to trust that the VPN operator is honest.
          Kaiana   
Web site: kaiana.com.br (not active) Origin: Brazil Category: Desktop Desktop environment: KDE Architecture: x86, x86_64 Based on: Kubuntu Wikipedia: Media: Live DVD The last version | Released: 12.04 | July 28, 2012 Kaiana (previously: Big Linux) – an operating system
          (IT) Full Stack Developer   

Rate: £350 - £450 per Day   Location: Glasgow, Scotland   

Full Stack Developer - 12 month contract - Glasgow City Centre One of Harvey Nash's leading FS clients is looking for an experienced full stack developer with an aptitude for general infrastructure knowledge. This will be an initial 12 month contract however the likelihood of extension is high. The successful candidate will be responsible for creating strategic solutions across a broad technology footprint. Experience within financial services would be advantageous, although not a prerequisite. Skill Set: - Previous Experience full-stack development experience with C#/C++/Java, Visual Studio, .Net, Windows/Linux web development - Understanding of secure code development/analysis - In-depth knowledge of how software works - Development using SQL and Relational Databases (eg SQL, DB2, Sybase, Oracle, MQ) - Windows Automation and Scripting (PowerShell, WMI) - Familiarity with common operating systems and entitlement models (Windows, Redhat Linux/Solaris) - Understanding of network architecture within an enterprise environment (eg Firewalls, Load Balancers) - Experience of developing in a structured Deployment Environment (DEV/QA/UAT/PROD) - Familiarity with the Software Development Life Cycle (SDLC) - Experience with Source Control and CI systems (eg GIT, Perforce, Jenkins) - Experience with Unit and Load testing tools - Experience with Code Review products (eg Crucible, FishEye) - Excellent communication/presentation skills and experience working with distributed teams - Candidates should demonstrate a strong ability to create technical, architectural and design documentationDesired Skills - Any experience creating (or working with) a "developer desktop" (dedicated desktop environment for developers) - Experience of the Linux development environment - An interest in cyber security - Knowledge of Defense in Depth computing principles - Experience with security products and technologies(eg Cyberark, PKI) - Systems management, user configuration and technology 
deployments across large, distributed environments (eg Chef, Zookeeper) - Understanding of core Windows Infrastructure technologies (eg Active Directory, GPO, CIFS, DFS, NFS) - Monitoring Tools (eg Scom, Netcool, WatchTower) - Experience with Apache/Tomcat-web server "Virtualisation" - Design patterns and best practices - Agile development: Planning, Retrospectives etc. To apply for this role or to discuss it in more detail then please call me and send a copy of your latest CV.
 
Rate: £350 - £450 per Day
Type: Contract
Location: Glasgow, Scotland
Country: UK
Contact: Cameron MacGrain
Advertiser: Harvey Nash Plc
Start Date: ASAP
Reference: JS-329601/001

          (IT) Java Tech Specialist/Developer/Engineer   

Location: Belfast, Northern Ireland   

Java Tech Specialist/Developer/Engineer This is a challenging and exciting opportunity to work within a leading banking environment and work closely with a Front Office business unit. The quality of our trading is now enhancing our leading position in the financial market. The successful candidate will play a key role in that success and be part of a large group of high-calibre developers. The successful candidate must be able to operate with a high level of self-motivation and produce results with a quick turnaround on key deliverables. Key Responsibilities: Requirements analysis and capture, working closely with the business users and other technology teams to define solutions. Development of the Electronic Execution applications and components, as part of a global development effort. Liaison with support and development teams. Third-line support of the platform during trading hours. Applying Equities financial product knowledge to the full development life cycle. Defining and evolving architectural standards; promoting adherence to these standards. Technical mentoring of junior team members. Knowledge and Experience: Strong Java background Unix/Linux OO programming Good relational database experience Working SQL knowledge We have both permanent and contract positions available, so please do not hesitate to apply. If you apply for this role, our thanks for your interest. However, due to the high volume of applications expected, we are unable to respond to everyone. Therefore, if you have not heard from Eurobase People within 5 working days then, unfortunately, your application has been unsuccessful. Eurobase People are acting as an Employment Agency.
 
Type: Unspecified
Location: Belfast, Northern Ireland
Country: UK
Contact: Adam Cohen
Advertiser: Eurobase People
Email: Adam.Cohen.F5223.39364@apps.jobserve.com
Start Date: ASAP
Reference: JS-TS

          (IT) Senior Java Engineer - Dublin   

Rate: Euros 500 +   Location: Dublin   

A Financial Services powerhouse client of mine is looking to add 2 Senior Engineers to their team to work on a large-scale platform which has 20,000 users. They are looking to implement and add new tools to their platform to ensure that their business and trading platforms are at the level that enables their business to grow threefold this year. Paying excellent daily rates, and the chance to join a team at the forefront of my client's innovation centre. What they are looking for: -At least 5 years of experience in delivery of innovative software solutions (Java) -Experience migrating enterprise-grade solutions to cloud offerings -Solid application development background with a focus on Java, in an enterprise environment -Experience setting up large-scale CI/CD pipelines -Familiarity with software development tools such as: JIRA, Confluence, SVN, Artifactory and others -Experience with continuous integration servers (TeamCity, Jenkins) -Experience with virtualization solutions (vRealize) -Experience working with containers (Docker) -Ability to partner with senior stakeholders both in the team and across teams -Dedicated self-starter, with the ability to drive a team and its contribution Skills that would be advantageous to have: -Maven/Gradle/MSBuild/cmake -Apache/Tomcat skills -Grails/Groovy -Track record of Linux Java application debugging and tuning -Work experience in large multinational corporations -Familiarity with agile methodologies Contact Brendan (see below)
 
Rate: Euros 500 +
Type: Contract
Location: Dublin
Country: Ireland
Contact: Brendan Hennessy
Advertiser: Stelfox Ltd
Email: Brendan.Hennessy.3712A.B45A6@apps.jobserve.com
Start Date: ASAP
Reference: JS

          Linux Administrator - Alteo Inc. IT Recruiting Services - Montréal, QC   
Alteo is looking for a Linux Administrator for a permanent job based in Montreal. Installs, builds and manages virtual & physical servers, networks & firewalls,...
From Alteo Inc. IT Recruiting Services - Wed, 21 Jun 2017 20:58:20 GMT - View all Montréal, QC jobs
          Linux Administrator - Alteo Recrutement Informatique - Montréal, QC   
Alteo is looking for a Linux Administrator for a permanent position based in Montreal. Deploying, maintaining and managing virtual and physical servers,...
From Alteo Recrutement Informatique - Wed, 21 Jun 2017 20:37:02 GMT - View all Montréal, QC jobs
          First release of the HumbleNet networking library, with in-browser support   
Developers from the Mozilla community have presented the first release of the HumbleNet project, a cross-platform networking library along with the server components (peer-server) it needs to operate. The library provides a simple C API for building networked applications, but handles network connections over the WebRTC and WebSockets protocols, which allows it to be used not only on traditional systems such as Windows, macOS and Linux, but also in a web browser via Asm.js and WebAssembly. The library code is written in C++ (Emscripten is used to compile it to Asm.js and WebAssembly) and is distributed under the BSD license.
          Release of the Docker 17.06 container management system   
Docker 17.06, a toolkit for managing isolated Linux containers, has been released; it provides a high-level API for manipulating containers that isolate individual applications. Docker lets you run arbitrary processes in isolation without worrying about assembling the container contents, and then transfer and clone the containers built for those processes to other servers, taking on all the work of creating, maintaining and servicing containers. The toolkit is based on the standard isolation mechanisms built into the Linux kernel: namespaces and control groups (cgroups). The Docker code is written in Go and distributed under the Apache 2.0 license.
          System76 announces a new Linux distribution, Pop!_OS   
System76, a company specialising in laptops, PCs and servers shipped with Linux, has introduced Pop!_OS, a new Linux distribution that will ship on System76 hardware in place of the previously offered Ubuntu. Pop!_OS continues to be built from the Ubuntu package base, but differs in its reworked desktop environment and a different target audience.
          Joining the press – Which topic would you like to read about?   
When I saw an ad on Linux Veda saying that they were looking for new contributors to their site, I thought, "Hey, why shouldn't I write for them?". Linux Veda (formerly Muktware) is a site that offers Free Software news, how-tos,
          Writing a Voice Activated SharePoint Todo List - IoT App on RPi   

Originally posted on: http://geekswithblogs.net/hroggero/archive/2017/05/16/writing-a-voice-activated-sharepoint-todo-list---iot-app.aspx

Ever wanted to write a voice activated system on an IoT device to keep track of your “todo list”, hear your commands being played back, and have the system send you a text message with your todo list when it’s time to walk out the door?  Well, I did. In this blog post, I will provide a high level overview of the technologies I used, why I used them, a few things I learned along the way, and partial code to assist with your learning curve if you decide to jump on this.  I also had the pleasure of demonstrating this prototype at Microsoft’s Community Connections in Atlanta in front of my colleagues.

How It Works

I wanted to build a system using 2 Raspberry Pis (one running Windows 10 IoT Core, and another running Raspbian) that achieved the following objectives:

  • Have 2 RPis that communicate through the Azure Service Bus
    This was an objective of mine, not necessarily a requirement; the intent was to have two RPis running different operating systems communicate asynchronously without sharing the same network.
  • Learn about the Microsoft Speech Recognition SDK
    I didn't want to send data to the cloud for speech recognition, so I needed an SDK on the RPi to perform this function; I chose the Microsoft Speech Recognition SDK for this purpose.
  • Communicate with multiple cloud services without any SDK, so that I could program the same way on Windows and Raspbian (Twilio, Azure Service Bus, Azure Table, SharePoint Online)
    I also wanted to minimize the learning curve of finding which SDKs could run on both Windows 10 IoT Core and Raspbian (Linux), so I used Enzo Unified to abstract the APIs and instead send simple HTTPS commands, giving me an SDK-less development environment (except for the Speech Recognition SDK). Seriously… go find an SDK for SharePoint Online for Raspbian and UWP (Windows 10 IoT Core).
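As a rough illustration of that SDK-less pattern, here is a minimal Python sketch (Python being what I used on the Raspbian RPi) of the kind of plain HTTPS call involved. Note that the endpoint URL and the auth header name below are hypothetical placeholders for illustration only; they are not the actual Enzo Unified API.

```python
import json
import urllib.request

# Hypothetical endpoint -- substitute your own Enzo Unified instance URL.
ENZO_URL = "https://my-enzo-instance.example.com/bsc/sharepointonline/additem"

def build_request(url, payload, auth_key):
    """Package a JSON payload as a plain HTTPS POST; no vendor SDK needed,
    so the same pattern works on Raspbian and (translated to C#) on UWP."""
    return urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        # "authToken" is a placeholder header name for this sketch.
        headers={"Content-Type": "application/json", "authToken": auth_key},
        method="POST",
    )

def enzo_call(url, payload, auth_key):
    """Send the request and decode the JSON response."""
    with urllib.request.urlopen(build_request(url, payload, auth_key)) as resp:
        return json.loads(resp.read().decode("utf-8"))
```

Because the device only ever speaks HTTPS, adding a new back-end service (Twilio, Azure Table, SharePoint) changes the URL and payload, not the client code.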

The overall solution looks like this:

[Image: solution architecture diagram]

Technologies

In order to achieve the above objectives, I used the following bill of materials:

Technology | Comment | Link
2x Raspberry Pi 2 Model B | Note that one RPi runs Windows 10 IoT Core, and the other runs Raspbian | http://amzn.to/2qnM6w7
Microphone | I tried a few, but the best one I found for this project was the Mini AKIRO USB Microphone | http://amzn.to/2pGbBtP
Speaker | I also tried a few, and while there is a problem with this speaker on the RPi under Windows, the Logitech Z50 was the better one | http://amzn.to/2qrNkop
USB Keyboard | I needed a simple way to have a keyboard and mouse while traveling, so I picked up the iPazzPort Mini Keyboard; awesome… | http://amzn.to/2rm0FOh
Monitor | You can use an existing monitor, but I also used the portable ATian 7-inch display. A bit small, but it does the job. | http://amzn.to/2pQ5She
IoT Dashboard | Utility that allows you to manage your RPis running Windows; make absolutely sure you run the latest build (it should upgrade automatically, but mine didn't) | http://bit.ly/2rmCWOU
Windows 10 IoT Core | The Microsoft O/S used on one of the RPis; use the latest build (mine was 15063). If you are looking for instructions on how to install Windows from a command prompt, the link proved useful | http://bit.ly/2pG9gik
Raspbian | Your RPi may be delivered with an SD card preloaded with the utilities needed to install Raspbian; connecting to a wired network makes the installation a breeze | http://bit.ly/2rbnp7u
Visual Studio 2015 | I used VS2015 (C#) to build the prototype for the Windows 10 IoT Core RPi | http://bit.ly/2e6ZGj5
Python 3 | On the Raspbian RPi, I used Python 3 to code | http://bit.ly/1L2Ubdb
Enzo Unified | I installed and configured an Enzo Unified instance (version 1.7) in the Azure cloud; for Enzo to talk to SharePoint Online, Twilio, Azure Service Bus and Azure Storage, I also needed accounts with these providers. You can try Enzo Unified free for 30 days | http://bit.ly/2rm4ymt

 

Things to Know

Creating a prototype involving the above technologies will inevitably lead you to collect a few nuggets along the way. Here are a few.

Disable Windows 10 IoT Core Updates

While disabling updates is generally speaking not recommended, IoT projects usually require a predictable environment that does not reboot in the middle of a presentation. To disable Windows Update on this O/S, I used information published by Mike Branstein on his blog: http://bit.ly/2rcOXt9

Try different hardware, and keep your receipts…

I had to try a few different components to find the right ones; the normally recommended S-150 USB Logitech speakers did not work for me: I lost all my USB ports and network connectivity as soon as I plugged them in. Neither did the JLab USB laptop speakers work. I also tried the 7.1-channel USB external sound card but was unable to make it work (others were successful). For audio input, I also tried the VAlinks Mini Flexible USB microphone; while it worked well, it picked up too much noise compared to the AKIRO and became almost unusable in a room of 20 people with background noise.

Hotel WiFi Connections

This was one of the most frustrating parts of this whole experience on Windows 10 IoT Core. You should know that this operating system does not currently come equipped with a browser. This means you cannot easily connect to a hotel network, since doing so usually requires starting a browser so that you can enter the user id and password provided by the hotel. Furthermore, since there is also no way to "forget" a previously registered network, you can find yourself in a serious bind… I first purchased the Skyroam Mobile Hotspot, hoping it would provide the answer. Unfortunately, the only time I tried it, in Tampa, Florida, the device could not obtain a connection. So I ended up adding a browser object to my UWP application and forcing it to refresh a specific image every time I start the app; this forces the hotel login page to show up when needed. I am still looking for a good solution to this problem.

Speech Privacy Policy on Windows

Because parts of the code I am running leverage the underlying APIs of Cortana, it seems that you must accept the Cortana privacy policy; this is required only the first time you run the application, but it is obviously a major nightmare for applications you may want to ship. I am not aware of any programmatic workaround at this time. This Stack Overflow post provides information about the policy and how to accept it.

What It Looks Like

A picture is worth a thousand words… so here is the complete setup:

[Photo: the complete setup]

C# Code

Since this is an ongoing prototype, I will not share the complete code at this time; however, I will share a few key components/techniques I used to make this work.

Speech Recognition

I used both continuous-dictation speech recognition and grammar-based recognition from the Microsoft Speech Recognition API. The difference is that the first gives you the ability to listen to "anything" being said, while the other only returns results that match the expected grammar. Both methods give you a degree of confidence, so you can decide whether the command/text input was sufficiently clear. The following class provides a mechanism for detecting input either through continuous dictation or using a grammar file. The timeout ensures that you do not wait forever. This code also returns the confidence level of the capture.

 

using Enzo.UWP;
using System;
using System.Collections.Generic;

using System.Diagnostics;
using System.Net.Http;
using System.Threading.Tasks;
using Windows.ApplicationModel;
using Windows.Devices.Gpio;
using Windows.Media.SpeechRecognition;
using Windows.Media.SpeechSynthesis;
using Windows.Storage;

namespace ClientIoT
{

    public class VoiceResponse
    {
        public string Response = null;
        public double RawConfidence = 0;
    }

    public class VoiceInput
    {
        private const int SPEECH_TIMEOUT = 3;
        private System.Threading.Timer verifyStatus;
        private string lastInput = "";
        private double lastRawConfidence = 0;
        private bool completed = false;
        private bool success = false;

        public async Task<VoiceResponse> WaitForText(string grammarFile)
        {
            return await WaitForText(SPEECH_TIMEOUT, grammarFile);
        }

        public async Task<VoiceResponse> WaitForText(int timeout = SPEECH_TIMEOUT, string grammarFile = null)
        {
            var resp = new VoiceResponse();
            try
            {
                success = false;
                completed = false;
                lastInput = "";
                lastRawConfidence = 0;

                SpeechRecognizer recognizerInput;
                DateTime dateNow = DateTime.UtcNow;

                recognizerInput = new SpeechRecognizer();
                recognizerInput.ContinuousRecognitionSession.ResultGenerated += ContinuousRecognitionSession_InputResultGenerated;
                recognizerInput.StateChanged += InputRecognizerStateChanged;
                recognizerInput.Timeouts.BabbleTimeout = TimeSpan.FromSeconds(timeout);
                recognizerInput.ContinuousRecognitionSession.Completed += ContinuousRecognitionSession_Completed;
                recognizerInput.ContinuousRecognitionSession.AutoStopSilenceTimeout = TimeSpan.FromSeconds(SPEECH_TIMEOUT);
                recognizerInput.Constraints.Clear();

                if (grammarFile != null)
                {
                    StorageFile grammarContentFile = await Package.Current.InstalledLocation.GetFileAsync(grammarFile);
                    SpeechRecognitionGrammarFileConstraint grammarConstraint = new SpeechRecognitionGrammarFileConstraint(grammarContentFile);
                    recognizerInput.Constraints.Add(grammarConstraint);
                }

                var compilationResult = await recognizerInput.CompileConstraintsAsync();

                // If successful, display the recognition result.
                if (compilationResult.Status != SpeechRecognitionResultStatus.Success)
                {
                    Debug.WriteLine(" ** VOICEINPUT - VoiceCompilationError - Status: " + compilationResult.Status);
                }

                recognizerInput.ContinuousRecognitionSession.AutoStopSilenceTimeout = TimeSpan.FromSeconds(timeout);
                recognizerInput.RecognitionQualityDegrading += RecognizerInput_RecognitionQualityDegrading;
                await recognizerInput.ContinuousRecognitionSession.StartAsync();

                System.Threading.SpinWait.SpinUntil(() =>
                    completed
                );
               
                resp = new VoiceResponse() { Response = lastInput, RawConfidence = lastRawConfidence };
               
                try
                {
                    recognizerInput.Dispose();
                    recognizerInput = null;
                }
                catch (Exception ex)
                {
                    Debug.WriteLine("** WaitForText (1) - Dispose ** " + ex.Message);
                }
            }
            catch (Exception ex2)
            {
                Debug.WriteLine("** WaitForText ** " + ex2.Message);
            }
            return resp;
        }

        private void RecognizerInput_RecognitionQualityDegrading(SpeechRecognizer sender, SpeechRecognitionQualityDegradingEventArgs args)
        {
            try
            {
                Debug.WriteLine("VOICE INPUT - QUALITY ISSUE: " + args.Problem.ToString());
            }
            catch (Exception ex)
            {
                Debug.WriteLine("** VOICE INPUT - RecognizerInput_RecognitionQualityDegrading ** " + ex.Message);
            }
        }

        private void ContinuousRecognitionSession_Completed(SpeechContinuousRecognitionSession sender, SpeechContinuousRecognitionCompletedEventArgs args)
        {
            if (args.Status == SpeechRecognitionResultStatus.Success
                || args.Status == SpeechRecognitionResultStatus.TimeoutExceeded)
                success = true;
            completed = true;
           
        }

        private void ContinuousRecognitionSession_InputResultGenerated(SpeechContinuousRecognitionSession sender, SpeechContinuousRecognitionResultGeneratedEventArgs args)
        {
            try
            {
                lastInput = "";
                if ((args.Result.Text ?? "").Length > 0)
                {
                    lastInput = args.Result.Text;
                    lastRawConfidence = args.Result.RawConfidence;
                    Debug.WriteLine(" " + lastInput);
                }
            }
            catch (Exception ex)
            {
                Debug.WriteLine("** ContinuousRecognitionSession_InputResultGenerated ** " + ex.Message);
            }
        }

        private void InputRecognizerStateChanged(SpeechRecognizer sender, SpeechRecognizerStateChangedEventArgs args)
        {
            Debug.WriteLine("  Input Speech recognizer state: " + args.State.ToString());
        }
    }
}

For example, if you want to wait for a “yes/no” confirmation, with a 3 second timeout, you would call the above code as such:

var yesNoResponse = await (new VoiceInput()).WaitForText(3, YESNO_FILE);

And the yes/no grammar file looks like this:

<?xml version="1.0" encoding="utf-8" ?>
<grammar
  version="1.0"
  xml:lang="en-US"
  root="enzoCommands"
  xmlns="http://www.w3.org/2001/06/grammar"
  tag-format="semantics/1.0">

  <rule id="root">
    <item>
      <ruleref uri="#enzoCommands"/>
      <tag>out.command=rules.latest();</tag>
    </item>
  </rule>

  <rule id="enzoCommands">
    <one-of>
      <item> yes </item>
      <item> yep </item>
      <item> yeah </item>
      <item> no </item>
      <item> nope </item>
      <item> nah </item>
    </one-of>
  </rule>

</grammar>

Calling Enzo Unified using HTTPS to Add a SharePoint Item

Another important part of the code is its ability to interact with other services through Enzo Unified, so that no SDK is needed on the UWP application. For an overview on how to access SharePoint Online through Enzo Unified, see this previous blog post.

The following code shows how to easily add an item to a SharePoint list through Enzo Unified. Posting this request to Enzo requires two parameters (added as headers) called “name” and “data” (data is an XML string containing the column names and values to be added as a list item).

public static async Task SharePointAddItem(string listName, string item)
{
    string enzoCommand = "/bsc/sharepoint/addlistitemraw";
    List<KeyValuePair<string, string>> headers = new List<KeyValuePair<string, string>>();

    string data = string.Format("<root><Title>{0}</Title></root>", item);

    headers.Add(new KeyValuePair<string, string>("name", listName));
    headers.Add(new KeyValuePair<string, string>("data", data));

    await SendRequestAsync(HttpMethod.Post, enzoCommand, headers);
}

And the SendRequestAsync method below shows you how to call Enzo Unified. Note that I added two cache control filters to avoid HTTP caching, and additional flags for calling Enzo Unified on an HTTPS port where a self-signed certificate is installed.

private static async Task<string> SendRequestAsync(HttpMethod method, string enzoCommand, List<KeyValuePair<string, string>> headers)
{
    string output = "";
    var request = EnzoUnifiedRESTLogin.BuildHttpWebRequest(method, enzoCommand, headers);

    var filter = new Windows.Web.Http.Filters.HttpBaseProtocolFilter();
    if (IGNORE_UNTRUSTEDCERT_ERROR)
    {
        filter.IgnorableServerCertificateErrors.Add(Windows.Security.Cryptography.Certificates.ChainValidationResult.Untrusted);
        filter.IgnorableServerCertificateErrors.Add(Windows.Security.Cryptography.Certificates.ChainValidationResult.InvalidName);
    }
    filter.CacheControl.ReadBehavior = Windows.Web.Http.Filters.HttpCacheReadBehavior.MostRecent;
    filter.CacheControl.WriteBehavior = Windows.Web.Http.Filters.HttpCacheWriteBehavior.NoCache;

    Windows.Web.Http.HttpClient httpClient = new Windows.Web.Http.HttpClient(filter);

    try
    {
        using (var response = await httpClient.SendRequestAsync(request))
        {
            output = await response.Content.ReadAsStringAsync();
        }
    }
    catch (Exception ex)
    {
        System.Diagnostics.Debug.WriteLine(" ** Send Http request error: " + ex.Message);
    }
    return output;
}

Last but not least, the BuildHttpWebRequest method looks like this; it ensures that the proper authentication headers are added, along with the authentication identifier for Enzo:

public static Windows.Web.Http.HttpRequestMessage BuildHttpWebRequest(Windows.Web.Http.HttpMethod httpmethod, string uri, List<KeyValuePair<string,string>> headers)
{
    bool hasClientAuth = false;

    Windows.Web.Http.HttpRequestMessage request = new Windows.Web.Http.HttpRequestMessage();

    request.Method = httpmethod;
    request.RequestUri = new Uri(ENZO_URI + uri);

    if (headers != null && headers.Count() > 0)
    {
        foreach (KeyValuePair<string, string> hdr in headers)
        {
            request.Headers[hdr.Key] = hdr.Value;
        }
    }

    if (!hasClientAuth)
        request.Headers["authToken"] = ENZO_AUTH_GUID;

    return request;
}

Text to Speech

There is also the Text to Speech aspect, where the system speaks back what it heard before confirming and acting on the command. Playback is a bit odd in that it requires a UI thread. In addition, Windows 10 IoT Core and the Raspberry Pi don't seem to play nicely together: every playback is bracketed by a loud tick. USB speakers are supposed to work around this, but none worked for me. The code below simply plays back a given text and then waits a little in an attempt to give the playback enough time to finish (playback is non-blocking, so the SpinWait is used to hold the code until the playback completes).

private async Task Say(string text)
{
    SpeechSynthesisStream ssstream = null;

    try
    {
        SpeechSynthesizer ss = new SpeechSynthesizer();
        ssstream = await ss.SynthesizeTextToStreamAsync(text);
    }
    catch (Exception exSay)
    {
        Debug.WriteLine(" ** SPEECH ERROR (1) ** - " + exSay.Message);
    }

    var task1 = this.Dispatcher.RunAsync(Windows.UI.Core.CoreDispatcherPriority.Normal, async () =>
    {
        try
        {
            await media.PlayStreamAsync(ssstream);
        }
        catch (Exception exSay)
        {
            Debug.WriteLine(" ** SPEECH ERROR (2) ** - " + exSay.Message);
        }
    });

    // Wait a little for the speech to complete, scaled to the length of the text being spoken
    System.Threading.SpinWait.SpinUntil(() => 1 == 0, text.Length * 150);
}

Calling the above code is trivial:

await Say("I am listening");

 

Python

The code in Python was trivial to build; this RPi was responsible for monitoring events in the Azure Service Bus and turning the LED attached to it on and off. The following pseudocode shows how to call Enzo Unified from Python without using any SDK:

import requests

enzourl_receiveMsg = "http://…/bsc/azurebus/receivedeletefromsubscription"
enzo_guid = "secretkeygoeshere"
topicName = "enzoiotdemo-general"
subName = "voicelight"

while True:
    try:
        headers = {'topicname': topicName,
                   'authToken': enzo_guid,
                   'subname': subName,
                   'count': '1',        # header values must be strings
                   'timeoutSec': '1'}
        response = requests.get(enzourl_receiveMsg, headers=headers)
        resp = response.json()
        if len(resp['data']['Table1']) > 0:
            pass  # extract the response here…
    except Exception:
        pass  # ignore transient errors and keep polling

 

Conclusion

This prototype demonstrated that while there were a few technical challenges along the way, it was relatively simple to build a speech recognition engine that can understand commands using Windows 10 IoT Core, .NET, and the Microsoft Speech Recognition SDK. 

Furthermore, the intent of this project was also to demonstrate that Enzo Unified makes it possible to code against multiple services without the need for an SDK on the client side, regardless of the platform and the development language. Abstracting SDKs behind simple HTTP calls makes it possible to access Twilio, SharePoint Online, Azure services and much more without any additional libraries on the client system.

About Herve Roggero

Herve Roggero, Microsoft Azure MVP, @hroggero, is the founder of Enzo Unified (http://www.enzounified.com/). Herve's experience includes software development, architecture, database administration and senior management with both global corporations and startup companies. Herve holds multiple certifications, including an MCDBA, MCSE, MCSD. He also holds a Master's degree in Business Administration from Indiana University. Herve is the co-author of "PRO SQL Azure" and “PRO SQL Server 2012 Practices” from Apress, a PluralSight author, and runs the Azure Florida Association.


          So you want to go Causal Neo4j in Azure? Sure we can do that   

Originally posted on: http://geekswithblogs.net/cskardon/archive/2017/04/26/so-you-want-to-go-causal-neo4j-in-azure-sure.aspx

So you might have noticed in the Azure marketplace you can install an HA instance of Neo4j – Awesomeballs! But what if you want a Causal cluster?

image

Hello Manual Operation!

Let’s start with a clean slate, typically in Azure you’ve probably got a dashboard stuffed full of other things, which can be distracting, so let’s create a new dashboard:

image

Give it a natty name:

image

Save and you now have an empty dashboard. Onwards!

To create our cluster, we’re gonna need 3 (count ‘em) 3 machines, the bare minimum for a cluster. So let’s fire up one, I’m creating a new Windows Server 2016 Datacenter machine. NB. I could be using Linux, but today I’ve gone Windows, and I’ll probably have a play with docker on them in a subsequent post…I digress.

image

At the bottom of the ‘new’ window, you’ll see a ‘deployment model’ option – choose ‘Resource Manager’

image

Then press ‘Create’ and start to fill in the basics!

image

  • Name: Important to remember what it is, I’ve optimistically gone with 01, allowing me to expand all the way up to 99 before I rue the day I didn’t choose 001.
  • User name: Important to remember how to login!
  • Resource group: I’m creating a new resource group, if you have an existing one you want to use, then go for it, but this gives me a good way to ensure all my Neo4j cluster resources are in one place.

Next, we’ve got to pick our size – I’m going with DS1_V2 (catchy) as it’s pretty much the cheapest, and well – I’m all about being cheap.

image

You should choose something appropriate for your needs, obvs. On to settings… which is the bulk of our workload.

image

I’m creating a new Virtual Network (VNet) and I’ve set the CIDR to the lowest I’m allowed to on Azure (10.0.0.0/29) which gives me 8 internal IP addresses – I only need 3, so… waste.

image

I’m leaving the public IP as it is, no need to change that, but I am changing the Network Security Group (NSG) as I intend on using the same one for each of my machines, and so having ‘01’ on the end (as is default) offends me Smile

image

Feel free to rename your diagnostics storage stuff if you want. The choice as they say – is yours.

Once you get the ‘ticks’ you are good to go:

image

It even adds it to the dashboard… awesomeballs!

image

Whilst we wait, lets add a couple of things to the dashboard, well, one thing, the Resource group, so view the resource groups (menu down the side) and press the ellipsis on the correct Resource group and Pin to the Dashboard:

image

So now I have:

image

After what seems like a lifetime – you’ll have a machine all setup and ready to go – well done you!

image

Now, as it takes a little while for these machines to be provisioned, I would recommend you provision another 2 now, the important bits to remember are:

  • Use the existing resource group:
    image
  • Use the same disk storage
  • Use the same virtual network
  • Use the same Network Security Group
    image

BTW, if you don’t you’re only giving yourself more work, as you’ll have to move them all to the right place eventually, may as well do it in one!

Whilst they are doing their thing, let’s setup Neo4j on the first machine, so let’s connect to it, firstly click on the VM and then the ‘connect’ button

image

We need two things on the machine

  1. Neo4j Enterprise
  2. Java

The simplest way I’ve found (provided your interwebs is up to it) is to Copy the file on your local machine, and Right-Click Paste onto the VM desktop – and yes – I’ve found it works way better using the mouse – sorry CLI-Guy

Once there, let’s install Java:

image

Then extract Neo4j to a comfy location, let’s say, the ‘C’ drive, (whilst we’re here… !Whaaaaat!!??? image 

an ‘A’ drive? I haven’t seen one of those for at least 10 years, if not longer).

Anyways - extracted and ready to roll:

image

UH OH

image

Did you get ‘failed’ deployments on those two new VMs? I did – so I went into each one and pressed ‘Start’ and that seemed to get them back up and running.

#badtimes

(That’s right – I just hashtagged in a blog post)

Anyways, we’ve now got the 3 machines up and I’m guessing you can rinse and repeat the setting up of Java and Neo4j on the other 2 machines. Now.

To configure the cluster!

We need the internal IPs of the machines, we can run ‘IpConfig’ on each machine, or just look at the V-Net on the portal and get it all in one go:

image

So, machine number 1… open up 'neo4j.conf' which you'll find in the 'conf' folder of Neo4j. Ugh. Notepad – seriously – it's 2017, couldn't there be at least a slight improvement in Notepad by now???

I’m not messing with any of the other settings, purely the clustering stuff – in real life you would probably configure it a little bit more. So I’m setting:

  • dbms.mode
    • CORE
  • causal_clustering.initial_discovery_members
    • 10.0.0.4:5000,10.0.0.5:5000,10.0.0.6:5000

I’m also uncommenting all the defaults in the ‘Causal Clustering Configuration’ section – I rarely trust defaults. I also uncomment

  • dbms.connectors.default_listen_address

So it’s contactable externally. Once the other two are setup as well we’re done right?

HA No chance! Firewalls – that's right, plural. Each machine has one, which needs to be set to accept the ports:

5000,6000,7000,7473,7474,7687

image

Obviously, you can choose not to do the last 3 ports and be uncontactable, or indeed choose any combo of them.

Aaaand, we need to configure the NSG:

image

I have 3 new ‘inbound’ rules – 7474 (browser), 7687 (bolt), 7000 – Raft.

Right. Let’s get this cluster up and contactable.

Log on to one of your VMs and fire up PowerShell (in admin mode)

image

First we navigate to the place we installed Neo4j (in my case c:\neo4j\neo4j-enterprise-3.1.3\bin) and then we import the Neo4j-Management module. To do this you need to have your ExecutionPolicy set appropriately. Being Lazy, I have it set to ‘Bypass’ (Set-ExecutionPolicy bypass).

Next we fire up the server in 'console' mode – this allows us to see what's happening; in a real deployment you're going to install it as a service.
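In PowerShell that sequence looks roughly like this (a sketch assuming the Neo4j-Management module bundled with Neo4j 3.1.x; paths will vary):

```powershell
Set-ExecutionPolicy Bypass          # lazy, as noted above
cd C:\neo4j\neo4j-enterprise-3.1.3\bin
Import-Module .\Neo4j-Management.psd1
Invoke-Neo4j console                # run in console mode to watch the logs
```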

You’ll see the below initially:

image

and it will sit like that until the other servers are booted up. So I’ll leave you to go do that now…

Done?

Good – now, we need to wait a little while for them to negotiate amongst themselves, but after a short while (let’s say 30 secs or less) you should see:

image

Congratulations! You have a cluster!

Logon to that machine via the IP it says, and you’ll see the Neo4j Browser, login and then run

:play sysinfo

image

You could now run something like:

Create (:User {Name:'Your Name'})

And then browse to the other machines to see it all nicely replicated.


          Reading the log on SQL for Linux   
The SQL Server errorlog is a really helpful place to find all sorts of fun facts about your SQL Server instance. As last checked today (6/29/2017) the latest CTP build seems to have problems reading t ... - Source: www.sqlservercentral.com
          The June 2017 Issue of the PCLinuxOS Magazine   

The PCLinuxOS Magazine staff is pleased to announce the release of the June 2017 issue. With the exception of a brief period in 2009, The PCLinuxOS Magazine has been published on a monthly basis since September, 2006. The PCLinuxOS Magazine is a product of the PCLinuxOS community, published by volunteers from the community.


          PCLinuxOS Roll-Up Release: Another Linux installed on my new notebook   

I happened across a new ISO image for the PCLinuxOS KDE desktop distribution this week.

PCLOS is a classic rolling-release Linux distribution, so this is just a "roll-up" release, pulling all of the updates since the last release together to make new installations easier, faster and more reliable.

Every time a new rolling release update is made we talk about those first two points, how it makes installation faster and easier, but I think my recent experience with two ASUS laptops shows that the last point can be important as well.

Read more


          The May 2017 Issue of the PCLinuxOS Magazine   

The PCLinuxOS Magazine staff is pleased to announce the release of the May 2017 issue. With the exception of a brief period in 2009, The PCLinuxOS Magazine has been published on a monthly basis since September, 2006. The PCLinuxOS Magazine is a product of the PCLinuxOS community, published by volunteers from the community.


          PCLOS and blackPanther OS   
  • Time for a change

    Three days ago, I decided to abandon my efforts to rescue my PCLOS KDE4 install, which was destroyed by a connection disruption while updating. I lost my connection for over a week and, when my ISP finally solved the problem, my desktop was so messed up that I gave up on it and decided to give PCLOS KDE5 a chance.

    I must confess that I am not a real fan of Plasma 5. However, as KDE4 is going the way of the dodo, I thought that it was better to take the leap and see how this beautiful Linux distro works with KDE's new desktop.

  • LinuxAndUbuntu Distro Review Of The Week blackPanther OS

    blackPanther OS is a Hungarian Linux distro. It takes out many features from other famous distros like GUI from fedora, drivers from Ubuntu and many others. The website of blackPanther OS states that:- “The blackPanther OS development started in 2002 by Charles K. Barcza. The First public version was 1.0 (Codename: Shadow) in 2003. Since then, the development is continuous, every year a new version is released. The last stable version, v16.1.2 has become available in Aug. of 2016. (The v16.2 is a special, non-free release, and v17.1 still under development) It was among the 5 top popular distributions January of 2010 on distrowatch.”


           PCLinuxOS 2017.03   

It has been about a year since I last explored the PCLinuxOS distribution. At that time I was experimenting with the project's MATE edition. Since I have not taken the chance to try PCLinuxOS since the distribution launched an edition with the KDE Plasma 5 desktop environment, I thought it would be fun to revisit this project. PCLinuxOS currently ships with version 5.8 of the Plasma desktop which is a long term support release of Plasma. The ISO file I downloaded for PCLinuxOS was 1.3GB in size.

Booting from the distribution's live media brings up a menu asking how we would like to launch the operating system. We can choose to launch PCLinuxOS with a graphical desktop with the default settings, load the desktop with safe mode graphics settings, boot to a text console or launch the project's system installer. Taking one of the live desktop options soon brings up a window asking us to select our keyboard's layout from a list. Then the Plasma desktop loads. PCLinuxOS has a varied and colourful wallpaper. There are icons on the desktop which open the Dolphin file manager and launch the system installer. At the bottom of the screen we find a panel which houses the application menu, a few quick-launch buttons, a task switcher and the system tray.

Read more


          The April 2017 Issue of the PCLinuxOS Magazine   

The PCLinuxOS Magazine staff is pleased to announce the release of the April 2017 issue. With the exception of a brief period in 2009, The PCLinuxOS Magazine has been published on a monthly basis since September, 2006. The PCLinuxOS Magazine is a product of the PCLinuxOS community, published by volunteers from the community.


          The March 2017 Issue of the PCLinuxOS Magazine   

The PCLinuxOS Magazine staff is pleased to announce the release of the March 2017 issue. With the exception of a brief period in 2009, The PCLinuxOS Magazine has been published on a monthly basis since September, 2006. The PCLinuxOS Magazine is a product of the PCLinuxOS community, published by volunteers from the community.


          Recreating the PCLinuxOS Full Monty with KDE Plasma Activities   

When I recently wrote about the new PCLinuxOS release, I was a bit disappointed to find that the Full Monty version had been laid to rest. I'm sure there were a lot of good reasons for this decision, and I have no quarrel with it. But it still made me a bit sad, because I have always kept the Full Monty on at least one of my systems (it is currently on my Acer All-In-One desktop), and I often showed it to people who were curious about Linux, as an example of its breadth, depth and flexibility.

So I decided that it might be a useful exercise for me to try to create the equivalent of the Full Monty desktop starting from the latest PCLinuxOS KDE5 distribution. There are two major features which distinguish the Full Monty desktop - it had six virtual desktops, each of which was dedicated to a specific use, and it had lots and lots and lots of packages installed. The desktops looked like this:

Read more


          Hands-on: New PCLinuxOS installation images   

PCLinuxOS is an independent distribution, it is not derived from or dependent upon any other current distribution. It is a rolling-release distribution, so it gets a steady flow of updates rather than having periodic point-releases. There was a time when the intention was to update the PCLinuxOS distribution images quarterly, but it seems that turned out to be too much work for too little return.

Read more


          The February 2017 issue of the PCLinuxOS Magazine   

The PCLinuxOS Magazine staff is pleased to announce the release of the February 2017 issue. With the exception of a brief period in 2009, The PCLinuxOS Magazine has been published on a monthly basis since September, 2006. The PCLinuxOS Magazine is a product of the PCLinuxOS community, published by volunteers from the community.


          The December 2016 Issue of the PCLinuxOS Magazine   

The PCLinuxOS Magazine staff is pleased to announce the release of the December 2016 issue. With the exception of a brief period in 2009, The PCLinuxOS Magazine has been published on a monthly basis since September, 2006. The PCLinuxOS Magazine is a product of the PCLinuxOS community, published by volunteers from the community.


          The November 2016 Issue of the PCLinuxOS Magazine   

The PCLinuxOS Magazine staff is pleased to announce the release of the November 2016 issue. With the exception of a brief period in 2009, The PCLinuxOS Magazine has been published on a monthly basis since September, 2006. The PCLinuxOS Magazine is a product of the PCLinuxOS community, published by volunteers from the community.


          The October 2016 Issue of the PCLinuxOS Magazine   

The PCLinuxOS Magazine staff is pleased to announce the release of the October 2016 issue. With the exception of a brief period in 2009, The PCLinuxOS Magazine has been published on a monthly basis since September, 2006. The PCLinuxOS Magazine is a product of the PCLinuxOS community, published by volunteers from the community.


          The September 2016 Issue of the PCLinuxOS Magazine   

The PCLinuxOS Magazine staff is pleased to announce the release of the September 2016 issue. With the exception of a brief period in 2009, The PCLinuxOS Magazine has been published on a monthly basis since September, 2006. The PCLinuxOS Magazine is a product of the PCLinuxOS community, published by volunteers from the community.


          C++ Developer - Distributed Computing - Morgan Stanley - Montréal, QC   
Comfortable programming in a Linux environment, familiar with ksh and bash. Morgan Stanley is a global financial services firm and a market leader in investment...
From Morgan Stanley - Wed, 28 Jun 2017 00:14:01 GMT - View all Montréal, QC jobs
          L3 Linux Operations - Morgan Stanley - Montréal, QC   
Must be able to read, understand and write intermediate to complex scripts using KSH, Bash, Perl, Python etc. Job....
From Morgan Stanley - Tue, 20 Jun 2017 18:06:31 GMT - View all Montréal, QC jobs
          Comment on Sayang Anak by Melvin   
Skype has launched its online-based client beta for the world, after starting it generally from the United states and You.K. earlier this month. Skype for Website also now can handle Chromebook and Linux for immediate online messaging connection (no voice and video yet, those require a connect-in set up). The expansion in the beta brings help for an extended list of spoken languages to help you bolster that worldwide user friendliness
          小白用Linux_5 (Linux for Beginners, Part 5)   

I want to devote this final post to Deepin Linux. I saved it for last because Deepin is so easy and convenient that, had I covered it first, the previous few posts would hardly have been necessary.

Deepin's adoption is still modest (then again, which Linux distribution's isn't?), but it fits Chinese users' habits even better than Mint. I have actually used many distributions over the years, and none of them, Mint included, gave me the out-of-the-box feeling Deepin does. Even after spending ages installing, configuring and tweaking Arch, I found that the end result I wanted was almost exactly what Deepin ships by default (a dock, the Sogou input method, a browser, WPS).

Unlike the many domestic Linux distributions that merely reskin Ubuntu and bolt on a few trivial calendar applets, Deepin has promoted itself internationally from the start and built its own desktop-environment development team. You can tell that Deepin set out in earnest to build a Linux system that ordinary people (not just programmers) can use with ease. That effort has been recognized by its users, and foreign reviewers have praised Deepin too; a while back it even climbed to second place in the short-term ranking on distrowatch.com.

I strongly recommend Deepin to anyone just getting started with Linux.

Installation guides online are very detailed, so I won't repeat them here; go straight to the official link.

Finally, for those entering the Linux world who want to go further, some recommended introductory reading (all findable and downloadable online):

《鸟哥的Linux私房菜》 (Vbird's Linux Private Kitchen)

《Python简明教程》 (A Byte of Python)

《Vim简明教程》 (a concise Vim tutorial)

廖雪峰的Git教程 (Liao Xuefeng's Git tutorial)


 
          小白用Linux_4 (Linux for Beginners, Part 4)   

This post assumes you have already installed Linux Mint 18; if not, see the installation guide I wrote earlier.

Once installation finishes and you reach the desktop, you will think this thing is not so different from Windows: a start-style menu in the bottom corner with the usual tools, a browser and a file manager, and a "Computer" icon on the desktop. At first glance it really does resemble Windows, and some shortcuts are even the same: Win opens the menu, Win+E opens the file manager. But use it for a while and things start to feel awkward. How do I switch input methods? How do I install such-and-such program? This post deals with exactly those questions of installing and using everyday software.

The first things we need are a browser, an input method and a document editor. Mint's standard browser is Firefox, which many people have also used on Windows, but honestly I have barely used Firefox since Chrome came out, and it still feels a little unfamiliar, so let's install Chrome first!

Linux has the concept of software sources (repositories), a bit like the software-center apps common on Windows. For now, think of a source as the server for your system and its software: your connection speed to it determines how fast you can download software and update the system. As mentioned in the installation guide, it is best not to enable online updates during installation, because the default sources tend to be slow; afterwards you can switch to a domestic mirror under "Administration -> Software Sources". Almost all common software can be found there and installed either from the command line or through the GUI; newer or less common software is covered later.

After switching sources, click "Update the cache" in the top-right corner to refresh the list of packages available to you. Next, look for our browser in "Administration -> Software Manager": type "chrome" and search. No results? How come? Chrome is not open source; it is released by Google alone, and for well-known reasons we currently cannot install Chrome directly from Google's website. But we do have an open-source substitute: Chromium, the open-source counterpart of Chrome. That is the one to install. Double-click it and it installs, with no need to click "Next, Next, Next" all the way through. Incidentally, if you no longer want it, you can also uninstall it from the Software Manager.

With the browser installed, it is the input method's turn. Sogou Pinyin has not yet been added to the official Ubuntu sources (why Ubuntu's sources? because Mint is based on Ubuntu), so we need to download the Linux build separately from the official Sogou Pinyin site: http://pinyin.sogou.com/linux/?r=pinyin. The download is a file with the .deb suffix, a software package that installs directly on Debian-family operating systems; our Mint belongs to the Debian family. Linux has other branches too, which I leave as homework. Double-click the downloaded file to install it; if all goes well, Sogou Pinyin is ready to use immediately.

For document processing on Linux I strongly recommend the famous WPS. Our home-grown WPS soundly beats OpenOffice and LibreOffice: the Linux build offers an experience almost identical to Windows and is fully compatible with Windows Word documents. Before WPS, documents and slides I edited in OpenOffice always had odd little problems when opened on Windows, but that has never happened with WPS: what you create on Linux opens on Windows looking exactly the same.

Below are some Linux alternatives to software commonly used on Windows; you can usually find them in the software sources, and failing that, most offer a .deb package for direct installation:

GIMP: image editing, a replacement for Photoshop

网易云音乐 (NetEase Cloud Music): music player

For video playback I use Mint's default player, which is basically good enough

For text editing, Sublime Text: free, polished and pleasant, its only flaw being a registration box that pops up at random when saving. As a compulsive manual saver I could not stand that, and then I discovered Microsoft's VS Code, which saved my life.

One more thing to note with Linux Mint: it can read Windows partitions, so accessing Windows files is no problem, but accessing files on Linux partitions from Windows is rather awkward. So if you want to write a document on Linux and occasionally open it on Windows, save it on a Windows partition.



 
          小白用Linux_3 (Linux for Beginners, Part 3)   
Installing Linux Mint

For installing Linux Mint I recommend the USB-stick approach. First download the ISO image from the Mint website: https://www.linuxmint.com/download.php

There are several editions here, namely Cinnamon, MATE and Xfce; they differ in the desktop environment they use. Broadly speaking, Cinnamon is the flashiest and well suited to modern PCs, roughly the Windows 7 of desktops; MATE is a little plainer, roughly XP; and Xfce is quite minimal, with something of a Windows 2000 feel. Correspondingly, the system resources their interfaces consume decrease in that order, but our hardware can almost always meet Cinnamon's requirements, so Cinnamon is the recommendation here.

Choose 32-bit or 64-bit according to your CPU. If your machine supports 64-bit, definitely choose the 64-bit system; essentially any PC from 2010 onwards supports it.

Plug in the USB stick, open the ISO image file in UltraISO, and click "Bootable" -> "Write Disk Image" in the toolbar.




After confirming that the hard disk drive field shows your USB stick, just click "OK".

Next we need to carve out some hard disk space for Linux. Using a tool such as Partition Assistant, shrink one of your partitions; 50 GB or more of freed space is basically enough. For the specific steps, you can refer to the method below:


Set the computer to boot from USB and insert the stick we just made. Then start the machine.

After a short wait you enter the live environment; click "Install Linux Mint" on the desktop to begin.

Choose the last option to let multiple systems coexist.


Format the partition you need.

Keyboard layout.

User information.

The installation begins.


The process wants to download language packs over the network, but I advise against updating at this point: the default software sources are slow from inside China, and we can perfectly well come back and update after installation once the sources have been switched.

Reboot after the installation completes.

GRUB 2 boots all three systems perfectly.

And with that, the installation is done.


 
          小白用Linux_2   
==================================================
小小吐槽一下:刚才写了快一个小时的文章因为我还是不太习惯新编辑器点了“切换到旧版”居然全部都没了。新浪博客作为一个知名博客网站居然没有自动保存功能么,我真的该考虑在别的地方写博客了。
==================================================
Linux严格意义上指的是Linux内核,内核并不能直接使用,必须配合上各种软件工具才行。所谓发行版,就是事先集成了各种不同的软件工具打包发布的Linux操作系统套件。他们面向的用户或者说是侧重点各不相同,有的主要针对桌面用户,有的针对服务器市场,有的针对便携式U盘系统。distrowatch.com 是一个专业介绍各种Linux发行版的网站,它上面有个非常有意思的板块是各种发行版根据点击量的排名,下面就是2016年8月13日这份榜单的截图:

我们可以看到,Mint以绝对优势排名第一,排名第二的Debian比他少了整整一千多的点击。另外,CentOS和Debian常见于服务器系统,也就是说普通用户心目中Mint是绝对的第一位,至于国内知名度非常高的Ubuntu排名第三。后面的文章我会重点介绍Mint和我们国产的deepin,他们一个是全世界最流行的面向普通桌面用户的发型版,一个是最符合国人使用习惯的发型版。

刚才在网上搜索信息的时候看到一篇2015年Linux榜单前十位的介绍,原文链接在此:http://www.codeceo.com/article/2015-10-linux-distributions.html,转载如下:

10 Top Linux Distributions of 2015

首先,让我们来看看下面的对比表,表中列出了2015年和2014年排名前10位的Linux发行版的情况:

POSITION 2015 2014
1 Linux Mint Linux Mint
2 Debian Ubuntu
3 Ubuntu Debian
4 openSUSE openSUSE
5 Fedora Fedora
6 Mageia Mageia
7 Manjaro Arch
8 CentOS Elementary
9 Arch CentOS
10 Elementary Zorin

正如你所看到的,这一年并没有发生太多或太显著的变化。下面就让我们从后往前地来看看最受欢迎的10个Linux发行版,数据截止时间为2015年12月9日。

10.Elementary OS

其开发人员标榜Elementary OS是“Windows和OS X快速又开放的替代品”,这款漂亮精致基于Ubuntu LTS的桌面Linux发行版,第一版发行于2011年,目前发行的是第三个稳定版本(代号“Freya”)。

由于Elementary OS是基于Ubuntu的,所以它完全兼容代码仓库和软件包。然而,它自己的应用程序管理器,在撰写本文的时候还在开发中。就我个人而言,这是我曾见过的最美观的桌面发行版。

9. Arch Linux

也许Arch最主要的特点之一就是,它是一个独立的开放源代码的发行版(这意味着它不基于任何其他的东西),并且受到了成千上万的Linux用户的喜爱。

由于Arch遵循滚动发布模式,因此你只要使用pacman执行定期的系统更新,就可以获得最新的软件。

传统上来说,不建议新用户使用Arch,主要是因为安装进程不会为你做任何的决定,所以你最好能对Linux相关的概念有一定程度的了解,以便成功的安装软件。

还有一些其他的基于Arch的发行版,如Apricity,Manjaro,Antergos等,更适合那些想要无障碍尝试Arch衍生产品的新手。

8. CentOS

虽然社区企业操作系统(Community ENTerprise Operating System)是Linux服务器最有名最常用的发行版,但是它的桌面版本还在继续不断完善中。

另外,它的稳健性、稳定性、和100%二进制兼容性,也使之成为了Red Hat Enterprise Linux的头号劲敌——特别是对云VPS供应商——也许这就是发行版持续增长的主要原因之一。

7. Manjaro

基于Arch Linux的Manjaro,目标在于利用让Arch广泛发行的功能的优势,同时提供一个更舒适的安装和运行体验,无论是新手还是有经验的Linux用户,都可以开箱即用。

Manjaro预装了桌面环境,图形应用程序(包括软件中心)和用于播放音频和视频的多媒体解码器。

6. Mageia

始于2010年,作为现在已经消失的Mandriva Linux的衍生品,受非盈利性组织支持的Mageia自那时起,成为了台式机和服务器著名的、安全的、稳定的Linux发行版。

Mageia最有趣的功能之一就是,它的全安装DVD允许你在多种桌面环境中选择,而不是强加一个给你。

截至今日,Mageia每9个月发布一个新版本,每个版本提供一年半的支持。

5. Fedora

由Fedora Project(Red Hat支持)背后遍布全球的社区志愿者和开发人员构建和维护,Fedora之所以能够持续几年成为使用最广泛的发行版之一,是因为它有三个主要的可用版本(Workstation(用于台式机)、Server edition和Cloud image),以及用于基于ARM(通常为headless)服务器的ARM版本。

不过,也许Fedora最显著的特点是,它总是在领衔整合新的软件包版本和技术到发行版中。此外,Red Hat Enterprise Linux和CentOS的新版本都基于Fedora。

4. openSUSE

openSUSE既可作为一个滚动发布版本,又可作为一个独立的定期发布版本使用。按照其开发人员的说法,无论你的经验水平处于哪种级别,openSUSE都是系统管理员、开发人员和桌面用户的Linux发行版之选(受到初学者和极客们的一致好评)。最重要的是,著名又屡获殊荣的SUSE Linux Enterprise产品基于openSUSE。

3. Ubuntu

也许这一发行版并不需要任何介绍。 Canonical,Ubuntu背后的公司,一直致力于使Ubuntu成为一个流行和普遍的发行版,并且现在你可在智能手机、平板电脑、个人电脑、服务器和云VPS的上面看到Ubuntu的身影。

此外,Ubuntu基于Debian,并且是一款非常受新用户欢迎的发行版——这可能就是Ubuntu在一段时间内持续增长的原因。虽然没有被计入这个排名,但Ubuntu是其他Canonical系列发行版,如Kubuntu、Xubuntu、Lubuntu的基础。

最重要的是,安装映像包含Try Ubuntu功能,可以让你在硬盘真正安装之前尝试Ubuntu。现在只有为数不多的几个重要的发行版提供类似这样的功能。

2. Debian

作为一个坚如磐石的Linux发行版,Debian每2年发布新的稳定版本,并且你可以放心,每个版本都已经过彻底的测试。

在写这篇文章的时候,Debian代码仓库中当前的稳定版本(代号Jessie)总共包含43500个包,这使得它成为了最完整的Linux发行版之一。

虽然它主要用于服务器上,但现在它的桌面版本已经在功能和外观上得到了明显的改善。

1. Linux Mint

Linux Mint的著名口号(“From freedom came elegance”),不只是说说而已。基于Ubuntu的Linux Mint,是一个稳定、功能强大、完整、易于使用的Linux发行版——我们还有很多很多的褒义词可以用来形容Mint。

Mint最显著的特点之一是,在安装过程中你被允许从一个列表中选择桌面环境,并且你可以放心,一旦它安装完了之后,你还能播放音乐和视频文件,而无需任何额外的配置步骤,因为标准安装提供了多媒体解码器的开箱即用。

注:本文中的截图取自每个发行版的网站。





 
          小白用Linux_0&1   

0

很久没有更新博客了,一方面是博客渐渐被微博微信等其他的社交媒体取代,流行度远远不如从前,另一方面我个人的工作和生活也有很多重大变化,写东西的热情也大大不如以往。但是每天看着微信里3分钟就读完的短文,听着罗辑思维一天60秒的小相声,也许听过看过很快就忘了,完全没有以前自己总结分享的快乐。这也是我决心重新开始更新的源动力。

《小白用Linux》这个系列我打算以介绍这些简单易用的发行版为主要内容,下一节开始,我会简单写一下学习Linux的好处,然后以Mint Linux为例介绍Linux的安装使用,并且会推荐一些我经常用的Linux软件。

1

最近Win10停止了免费升级,我想很多人还是跟以前一样等着网上各种破解版,或者继续坚守Win7甚至XP。但是大家也别忘了那句“出来混,总是要还的”。人家微软的系统是要卖钱的,反盗版的手段只会越来越严厉。大家都记得几年前的Windows黑屏事件吧:既然能把你的桌面换成黑屏,自然也有能力删光你的所有资料。如果哪天微软悄悄更新一个补丁,把所有盗版用户的数据全部加密或者直接删除,也不是做不到。更别提windows平台下各种木马病毒流氓软件,把重要资料甚至隐私保存在这样的环境下真的好么?

相对而言,开源免费的Linux就好多了:所有的源代码都是公开的,不用担心有后门;Linux系统的安全稳定性也是公认的,更不用担心盗版方面的问题。根据我最近两年使用Linux的经验来看,即使一个对Linux一窍不通的小白用户也完全可以驾驭,比如我们国产的deepin深度Linux和国际上大名鼎鼎的Ubuntu和Mint,都是非常符合Windows下操作习惯的发行版。它们的学习成本非常低,几乎不需要任何专业知识就能满足日常使用的需求,浏览网页,编辑文档,听音乐看电影,学习编程都跟windows平台下没有任何区别。当然,如果你想要学习计算机编程有关的知识,那么Linux与程序员有天生的亲和度,它允许你修改任何你想要修改的地方,你唯一的限制就只有你的想象力。

Linux平台真正的短板不得不说还是游戏资源太少,如果想在Linux下玩LoL或者魔兽世界,虽然不是完全不可能,但恐怕有点费劲。

关于Linux系统的起源和一些背景知识有兴趣的话可以在百度百科里看看(链接在此),不看也没关系,反正我们也不用关心^_^,下一节我会介绍一些常见的Linux发行版以及他们的特点。


 
          ubuntu apache2配置详解(含虚拟主机配置方法)   

网上查到的是Apache 2.2的配置,而Apache 2.4使用相同配置后不能访问,日志中出现“apache AH01630: client denied by server configuration”。这时只要把

Order deny,allow
Allow from all

替换为

Require all granted

即可。
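为了对照,下面给出一个完整的 Directory 访问控制配置在两个版本下的写法(示意,目录路径仅为举例):

```apache
# Apache 2.2 的写法
<Directory /var/www/html>
    Order deny,allow
    Allow from all
</Directory>

# Apache 2.4 的等价写法
<Directory /var/www/html>
    Require all granted
</Directory>
```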

===============================================================

在Windows下,Apache的配置文件通常只有一个,就是httpd.conf。但我在Ubuntu Linux上用apt-get install apache2命令安装了Apache2后,竟然发现它的httpd.conf(位于/etc/apache2目录)是空的!进而发现Ubuntu的 Apache软件包的配置文件并不像Windows的那样简单,它把各个设置项分在了不同的配置文件中,看起来复杂,但仔细想想设计得确实很合理。

严格地说,Ubuntu的Apache(或者应该说Linux下的Apache?我不清楚其他发行版的apache软件包)的配置文件是 /etc/apache2/apache2.conf,Apache在启动时会自动读取这个文件的配置信息。而其他的一些配置文件,如 httpd.conf等,则是通过Include指令包含进来。在apache2.conf中可以找到这些Include行:

# Include module configuration:
Include /etc/apache2/mods-enabled/*.load
Include /etc/apache2/mods-enabled/*.conf

# Include all the user configurations:
Include /etc/apache2/httpd.conf

# Include ports listing
Include /etc/apache2/ports.conf
……
# Include generic snippets of statements
Include /etc/apache2/conf.d/

# Include the virtual host configurations:
Include /etc/apache2/sites-enabled/

结合注释,可以很清楚地看出每个配置文件的大体作用。当然,你完全可以把所有的设置放在apache2.conf或者httpd.conf或者任何一个配置文件中。Apache2的这种划分只是一种比较好的习惯。

安装完Apache后的最重要的一件事就是要知道Web文档根目录在什么地方,对于Ubuntu而言,默认的是/var/www。怎么知道的呢? apache2.conf里并没有DocumentRoot项,httpd.conf又是空的,因此肯定在其他的文件中。经过搜索,发现在 /etc/apache2/sites-enabled/000-default中,里面有这样的内容:

NameVirtualHost *
<VirtualHost *>
        ServerAdmin webmaster@localhost

        DocumentRoot /var/www/
        ……
</VirtualHost>

这是设置虚拟主机的,对我来说没什么意义。所以我就把apache2.conf里的Include /etc/apache2/sites-enabled/一行注释掉了,并且在httpd.conf里设置DocumentRoot为我的用户目录下的某个目录,这样方便开发。

再看看/etc/apache2目录下的东西。刚才在apache2.conf里发现了sites-enabled目录,而在/etc/apache2下还有一个sites-available目录,这里面是放什么的呢?其实,这里面才是真正的配置文件,而sites-enabled目录存放的只是一些指向这里的文件的符号链接,你可以用ls /etc/apache2/sites-enabled/来证实一下。所以,如果apache上配置了多个虚拟主机,每个虚拟主机的配置文件都放在sites-available下,那么对于虚拟主机的停用、启用就非常方便了:当在sites-enabled下建立一个指向某个虚拟主机配置文件的链接时,就启用了它;如果要关闭某个虚拟主机的话,只需删除相应的链接即可,根本不用去改配置文件。
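上面“建链接即启用、删链接即停用”的机制可以用下面几条命令演示(示意:这里用临时目录模拟 /etc/apache2 下的 sites-available 和 sites-enabled;在真实的 Ubuntu 系统上,也可以直接用自带的 a2ensite / a2dissite 命令来维护这些符号链接):

```shell
# 用临时目录模拟 sites-available / sites-enabled
tmp=$(mktemp -d)
mkdir -p "$tmp/sites-available" "$tmp/sites-enabled"

# 虚拟主机的真正配置文件放在 sites-available 下
echo "DocumentRoot /var/www/demo" > "$tmp/sites-available/demo.conf"

# 启用:在 sites-enabled 下建立指向它的符号链接
ln -s "$tmp/sites-available/demo.conf" "$tmp/sites-enabled/demo.conf"
ls "$tmp/sites-enabled"            # 此时能看到 demo.conf

# 停用:删除链接即可,sites-available 里的配置文件原封不动
rm "$tmp/sites-enabled/demo.conf"
ls "$tmp/sites-available"          # demo.conf 仍在
rm -rf "$tmp"
```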

======================================================

mods-available、mods-enabled和上面说的sites-available、sites-enabled类似,这两个目录是存放apache功能模块的配置文件和链接的。当我用apt-get install php5安装了PHP模块后,在这两个目录里就有了php5.load、php5.conf和指向这两个文件的链接。这种目录结构对于启用、停用某个Apache模块是非常方便的。

最后一个要说的是ports.conf,这里面设置了Apache使用的端口。如果需要调整默认的端口设置,建议编辑这个文件。或者你嫌它实在多余,也可以先把apache2.conf中的Include /etc/apache2/ports.conf一行去掉,在httpd.conf里设置Apache端口。
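以 ports.conf 为例,修改默认端口只需要改其中的 Listen 行(示意,假设想把默认的 80 端口改成 8080):

```apache
# /etc/apache2/ports.conf 原来的内容(节选)
Listen 80

# 改用 8080 端口时只需改为
Listen 8080
```

改完后重启 Apache 才会生效。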

ubuntu里缺省安装的目录结构有一点不同。在ubuntu中module和 virtual host的配置都有两个目录,一个是available,一个是enabled。available目录存放有效的内容,但不起作用,只有用ln链接到enabled里去才可以起作用。对调试使用都很方便,但是如果事先不知道,找起来也有点麻烦。

/etc/apache2/sites-available 里放的是VH的配置,但不起作用,要把文件link到 sites-enabled 目录里才行。

<VirtualHost *:80>
        ServerName 域名

        DocumentRoot 把rails项目里的public当根目录

        <Directory 上面的public目录>
                Options ExecCGI FollowSymLinks
                AllowOverride all
                allow from all
                Order allow,deny
        </Directory>

        ErrorLog /var/log/apache2/error-域名.log
</VirtualHost>

====================================================

 

什么是 Virtual Hosting(虚拟主机)?
简单说就是同一台服务器可以同时处理超过一个域名(domain)。假设www.example1.net和 www.example2.net两个域名都指向同一服务器,WEB服务器又支持Virtual Hosting,那么www.example1.net和www.example2.net可以访问到同一服务器上不同的WEB空间(网站文件存放目录)。

 

配置格式

在Apache2中,有效的站点信息都存放在/etc/apache2/sites-available/用户名(文件) 里面。 我们可以添加格式如下的信息来增加一个有效的虚拟空间,将default里的大部分东西拷贝过来就行了,记得改DocumentRoot作为默认目录,在Directory中设置路径,注意端口号不要与其他的虚拟主机重复:

<VirtualHost *:80>
# 在ServerName后加上你的网站名称
ServerName www.demo.com

# 在ServerAdmin后加上网站管理员的电邮地址,方便别人有问题时可以联络网站管理员。
ServerAdmin fish@demo.com

# 在DocumentRoot后加上存放网站内容的目录路径(用户的个人目录)
DocumentRoot /home/fish/www/html

<Directory /home/fish/www/html>
        Options Indexes FollowSymLinks MultiViews
        Require all granted
</Directory>

ErrorLog /home/fish/www/html/error.log

# Possible values include: debug, info, notice, warn, error, crit,
# alert, emerg.
LogLevel warn

CustomLog /home/fish/www/html/access.log combined

ServerSignature On
</VirtualHost>


如果你的服务器有多个IP,而不同的IP又有着不一样的虚拟用户的话,可以修改成:


...

启用配置

前面我们配置好的内容只是“有效”虚拟主机,真正发挥效果的话得放到 /etc/apache2/sites-enabled 文件夹下面。我们可以使用ln命令来建立一对关联文件:

sudo ln -s /etc/apache2/sites-available/www.demo.com.conf /etc/apache2/sites-enabled/www.demo.com.conf 

检查语法,重启web服务

谨慎起见,我们在重启服务前先检查下语法:


sudo apache2ctl configtest

没有错误的话,再重启Apache


sudo /etc/init.d/apache2 -k restart

 



 
          Wing FTP Server For Linux(64bit) 4.9.1   
Secure and powerful FTP server software for Windows, Linux, Mac OSX and Solaris
          Comment on What’s New in C# 7.0 by Anand Kulkarni SG   
I tested the simple thread support on Linux using the latest 2.0.0-preview2-download and it seems the thread abort functionality is not supported on the Linux platform. When app is run https://msdn.microsoft.com/en-us/library/aa645740(v=vs.71).aspx Example 1. the oThread.abort() dumps a core file.
          Comment on What Does Homeowners Insurance Actually Cover? by Brook   
Skype has launched its internet-structured customer beta to the entire world, right after introducing it broadly in the U.S. and You.K. previous this month. Skype for Internet also now facilitates Chromebook and Linux for instant messaging conversation (no voice and video yet, individuals require a plug-in installing). The expansion of your beta brings assistance for an extended selection of languages to help reinforce that international usability
          Comment on Financial Literacy: Reading A Financial Statement by Karol   
Skype has opened its online-structured customer beta on the entire world, soon after introducing it broadly within the U.S. and You.K. previous this 30 days. Skype for Web also now works with Linux and Chromebook for instant online messaging communication (no voice and video however, those call for a plug-in installation). The expansion in the beta contributes help for an extended selection of dialects to aid strengthen that international usability
          Perangkat Lunak Komputer (Software)   
Perangkat Lunak (software) merupakan suatu program yang dibuat oleh pembuat program untuk  menjalankan perangkat keras komputer. Perangkat Lunak adalah program yang berisi kumpulan instruksi untuk melakukan proses pengolahan data. Software sebagai penghubung antara manusia sebagai pengguna dengan perangkat keras komputer, berfungsi menerjemahkan bahasa manusia ke dalam bahasa mesin sehingga perangkat keras komputer memahami keinginan pengguna dan menjalankan instruksi yang diberikan dan selanjutnya memberikan hasil yang diinginkan oleh manusia tersebut.
Perangkat lunak komputer berfungsi untuk :
  1. Mengidentifikasi program
  2. Menyiapkan aplikasi program sehingga tata kerja seluruh perangkat komputer terkontrol.
  3. Mengatur dan membuat pekerjaan lebih efisien.
Macam-macam Perangkat Lunak :
  1. Sistem Operasi (Operating System),
  2. Program Aplikasi (Application Programs),
  3. Bahasa Pemrograman (Programming Language),
  4. Program Bantu (Utility)
1.  Sistem Operasi (Operating System)
Sistem Operasi yaitu program yang berfungsi untuk mengendalikan sistem kerja yang mendasar sehingga mengatur kerja media input, output, tabel pengkodean, memori, penjadwalan prosesor, dan lain-lain. Sistem operasi berfungsi sebagai penghubung antara manusia dengan perangkat keras dan perangkat lunak yang akan digunakan. Adapun fungsi utama sistem operasi adalah :
  • Menyimpan program dan aksesnya
  • Membagi tugas di dalam CPU
  • Mengalokasikan tugas-tugas penting
  • Merekam sumber-sumber data
  • Mengatur memori sistem termasuk penyimpanan, menghapus dan mendapatkan data
  • Memeriksa kesalahan sistem
  • Multitugas pada OS/2, Windows 95, Windows 98, Windows NT/2000/XP
  • Memelihara keamanan sistem,   khusus pada jaringan yang membutuhkan kata sandi (password) dan penggunaan ID
Contoh Sistem Operasi, misalnya : Disk operating System (DOS), Microsoft Windows, Linux, dan Unix.
2.  Program Aplikasi (Aplication Programs)
Program Aplikasi adalah  perangkat lunak yang dirancang khusus untuk kebutuhan tertentu, misalnya program  pengolah kata, mengelola lembar kerja, program presentasi, design grafis, dan lain-lain.
3. Bahasa Pemrograman (Programming Language)
Perangkat lunak bahasa yaitu program yang digunakan untuk menerjemahkan instruksi-instruksi yang ditulis dalam bahasa pemrograman ke bahasa mesin dengan aturan atau prosedur tertentu, agar diterima oleh komputer.
Ada 3 level bahasa pemrograman, yaitu :
  • Bahasa tingkat rendah (low level language)
Bahasa ini disebut juga bahasa mesin (assembler), dimana pengkodean bahasanya menggunakan kode angka 0 dan 1.
  • Bahasa tingkat tinggi (high level language)
Bahasa ini termasuk dalam bahasa pemrograman yang mudah dipelajari oleh pengguna komputer karena menggunakan bahasa Inggris. Contohnya : BASIC, COBOL, PASCAL, FORTRAN.
  • Bahasa generasi keempat (4 GL)
Bahasa pemrograman 4 GL (Fourth Generation Language) merupakan bahasa yang berorientasi   pada objek yang disebut Object Oriented Programming (OOP). Contoh software ini adalah : Visual Basic, Delphi, Visual C++
4. Program Bantu (Utility)
Perangkat Lunak merupakan perangkat lunak yang berfungsi sebagai aplikasi pembantu dalam kegiatan yang ada hubungannya dengan komputer, misalnya memformat disket, mengopi data, mengkompres file, dan lain-lain.
Contoh software ini diantaranya :
  • Norton Utility
  • Winzip
  • Norton Ghost
  • Antivirus

          11.6" (26.46cm) Acer TMB118-RN-C6WX FHD/N3450/4GB/500GB/Linux   
11.6" (26.46cm) Acer TMB118-RN-C6WX FHD/N3450/4GB/500GB/Linux


Preis: € 389,40

          11.6" (26.46cm) Acer TMB118-RN-C2Z9 FHD/N3450/4GB/128GB SSD/Linux   
11.6" (26.46cm) Acer TMB118-RN-C2Z9 FHD/N3450/4GB/128GB SSD/Linux


Preis: € 413,30

          Libretro rilascia la prima versione alpha del core Dolphin per PC Windows e Linux   

Il team Libretro rilascia pubblicamente la prima release alpha del core Dolphin, il famoso emulatore capace di eseguire i giochi del GameCube e della Wii su PC Windows e Linux a 64 bit. L’emulatore come per ogni prima versione si dimostra altamente instabile anche se promettente, ci saranno ulteriori aggiornamenti e sviluppi già dai prossimi...

L'articolo Libretro rilascia la prima versione alpha del core Dolphin per PC Windows e Linux proviene da BiteYourConsole.


          Impresora Brother Pj763 A4 Mobile Printer 300 DPI USB Bluetooth    





General Tipo de impresora Impresora personal - térmica directa - monocromo Impresora Velocidad de impresión Hasta 8 ppm - B/W Tecnología de conectividad Inalámbrico Interfaz USB 2.0, Bluetooth 2.1 EDR Resolución máx. (blanco y negro) 300 x 300 ppp Simulación idioma EPSON ESC/P, EPSON ESC/P Raster Códigos de barras Código 93, Industrial 2 de 5, Código 39, Código GS1 DataBar, Código QR, MaxiCode, Código Aztec, UPC-A, UPC-E, MicroPDF417, Code 128, Postnet, PDF417, Data Matrix, ITF, código Micro QR, EAN-8, EAN-13, MSI, GS1-128 Memoria RAM RAM instalada (máx.) 32 MB Memoria Flash Memoria Flash 6 MB Tratamiento soporte Tipo de soporte Papel térmico Clase de tamaño de material A4 Tamaño máximo de material 216 x 5490 mm Tamaño mínimo de soporte (Personalizado) 105 mm x 25.4 mm Tamaño máximo de soporte (personalizado) 216 mm x 5490 mm Tamaño soporte A4 (210 x 297 mm) Alimentadores de medios 1 x manual - 1 hoja - 216 x 5490 mm Conexión de redes Conexión de redes Servidor de impresión Expansión / Conectividad Conexiones 1 x USB 2.0 - mini-USB tipo B de 4 patillas Diverso Accesorios incluidos Limpiador de cabezales Cables incluidos 1 x cable USB Compatible with Windows 7 Aplicaciones y dispositivos "Compatible with Windows 7" llevan aseguramiento de Microsoft que estos productos fueron sometidos a tests para compatibilidad y fiabilidad con 32-bit y 64-bit Windows 7. Software / Requisitos del sistema Sistema operativo requerido Linux, Microsoft Windows 7, Microsoft Windows Vista, Microsoft Windows Server 2008 R2, Microsoft Windows Server 2008, Windows 8, Microsoft Windows Server 2012, Android 2.3 o posterior, Apple MacOS X 10.8, Microsoft Windows Server 2012 R2, Windows 8.1, Apple MacOS X 10.9, Apple MacOS X 10.10, Windows 10 Garantía del fabricante Servicio y mantenimiento Garantía limitada - 1 año Dimensiones y peso Anchura 25.5 cm Profundidad 5.5 cm Altura 3 cm Peso 480 g

Precio: 529,50 € (Iva incluído)




          VGA ASUS GEFORCE GTX 1050TI 4GB OC EDITION   





ASUS NVIDIA GeForce GTX 1050 Ti, 4GB. Familia de procesadores de gráficos: NVIDIA, Procesador gráfico: GeForce GTX 1050 Ti, Máxima resolución: 7680 x 4320 Pixeles. Gráficos discretos memoria del adaptador: 4 GB, Tipo de memoria de adaptador gráfico: GDDR5, Ancho de datos: 128 bit. Tipo de interfaz: PCI Express 3.0. Tipo de enfriamiento: Activo. Suministro de energía al sistema mínimo: 300 W, Consumo energético: 75 W Familia de procesadores de gráficos: NVIDIA Procesador gráfico: GeForce GTX 1050 Ti Máxima resolución: 7680 x 4320 Pixeles CUDA: Si Frecuencia del procesador: 1290 MHz Soporte para proceso paralelo: No compatible Aumento de la velocidad de reloj del procesador: 1392 MHz Resolución (máxima digital): 7680 x 4320 Pixeles Núcleos CUDA: 768 Gráficos discretos memoria del adaptador: 4 GB Tipo de memoria de adaptador gráfico: GDDR5 Ancho de datos: 128 bit Velocidad de memoria del reloj: 7008 MHz Ancho de banda de memoria (max): 112 GB/s Tipo de interfaz: PCI Express 3.0 Número de puertos HDMI: 1 Cantidad de puertos DVI-D: 1 Cantidad de DisplayPorts: 1 Versión HDMI: 2.0 Versión de DisplayPort: 1.4 Versión DirectX: 12.0 Versión OpenGL: 4.5 Dual Link DVI: Si HDCP: Si NVIDIA G-SYNC: Si Tipo de enfriamiento: Activo Número de ranuras: 2 Suministro de energía al sistema mínimo: 300 W Consumo energético: 75 W Sistema operativo Windows soportado: Windows 10 Education,Windows 10 Education x64,Windows 10 Enterprise,Windows 10 Enterprise x64,Windows 10 Home,Windows 10 Home x64,Windows 10 Pro,Windows 10 Pro x64,Windows 7 Enterprise,Windows 7 Enterprise x64,Windows 7 Home Basic,Windows 7 Home Basic x64,Windows 7 Home Premium,Windows 7 Home Premium x64,Windows 7 Professional,Windows 7 Professional x64,Windows 7 Starter,Windows 7 Starter x64,Windows 7 Ultimate,Windows 7 Ultimate x64,Windows 8,Windows 8 Enterprise,Windows 8 Enterprise x64,Windows 8 Pro,Windows 8 Pro x64,Windows 8 x64,Windows 8.1,Windows 8.1 Enterprise,Windows 8.1 Enterprise x64,Windows 8.1 Pro,Windows 8.1 Pro 
x64,Windows 8.1 x64 Sistema operativo Linux soportado: Si Profundidad: 212 mm Altura: 111 mm Ancho: 38 mm Software incluido: ASUS GPU Tweak II Source data-sheet: ICEcat.biz

Precio: 171,50 € (Iva incluído)




          Dell Vostro 3568   
Dell Vostro 3568 с Intel Core i5-7200U 2.50 GHz up to 3.10 GHz, 3 MB cache, 8192 MB DDR4, 1000GB HDD, 120GB SSD, 15.6" 1366x768 матов, AMD Radeon R5 M420 (2GB DDR3L), Ubuntu Linux 16.04, черен
          Dell Vostro 3568   
Dell Vostro 3568 с Intel Core i5-7200U 2.50 GHz up to 3.10 GHz, 3 MB cache, 8192 MB DDR4, 1000GB HDD, 120GB SSD, 15.6" 1366x768 матов, Intel HD Graphics 620, Ubuntu Linux 16.04, Сив
          Comentário sobre Criando interfaces gráficas com Electron para Linux Embarcado por lauroG   
Acho que você está pesquisando um modo quiosque. Dá uma pesquisada nesse termo
          Dell Vostro 3568   
Dell Vostro 3568 с Intel Core i5-7200U 2.50 GHz up to 3.10 GHz, 3 MB cache, 8192 MB DDR4, 1000GB HDD, 120GB SSD, 15.6" 1366x768 матов, AMD Radeon R5 M420X 2GB, Ubuntu Linux 16.04, Черен
          Dell Vostro 3568   
Dell Vostro 3568 с Intel Core i5-7200U 2.50 GHz up to 3.10 GHz, 3 MB cache, 8192 MB DDR4, 1000GB HDD, 120GB SSD, 15.6" 1366x768 матов, Intel HD Graphics 620, Ubuntu Linux 16.04, Черен
          Dell Vostro 3568   
Dell Vostro 3568 с Intel Core i5-7200U 2.50 GHz up to 3.10 GHz, 3 MB cache, 8192 MB DDR4, 1000GB HDD, 120GB SSD, 15.6" 1366x768 матов, Intel HD Graphics 520, Ubuntu Linux 16.04, Черен
          Dell Vostro 3568   
Dell Vostro 3568 с Intel Core i5-7200U 2.50 GHz up to 3.10 GHz, 3 MB cache, 8192 MB DDR4, 1000GB HDD, 120GB SSD, 15.6" 1366x768 матов, Intel HD Graphics, Ubuntu Linux 16.04, Черен
          Dell Vostro 3568   
Dell Vostro 3568 с Intel Core i3-6006U 2.00 GHz, 3 MB cache, 8192 MB DDR4, 1000GB HDD, 120GB SSD, 15.6" 1366x768, Intel HD Graphics 520, Ubuntu Linux 16.04, Grey
          Dell Vostro 3568   
Dell Vostro 3568 с Intel Core i3-6006U 2.00 GHz, 3 MB cache, 8192 MB DDR4, 1000GB HDD, 120GB SSD, 15.6" 1366x768 матов, Intel HD Graphics 520, Ubuntu Linux 16.04, Черен
          Dell Vostro 3568   
Dell Vostro 3568 с Intel Core i3-6006U 2.00 GHz, 3 MB cache, 8192 MB DDR4, 1000GB HDD, 120GB SSD, 15.6" 1366x768 матов, AMD Radeon R5 M420 (2GB DDR3L), Ubuntu Linux 16.04, Черен
          Dell Vostro 3568   
Dell Vostro 3568 с Intel Core i5-7200U 2.50 GHz up to 3.10 GHz, 3 MB cache, 8192 MB DDR4, 1000GB HDD, 15.6" 1366x768 матов, AMD Radeon R5 M420 (2GB DDR3L), Ubuntu Linux 16.04, черен
          Dell Vostro 3568   
Dell Vostro 3568 с Intel Core i5-7200U 2.50 GHz up to 3.10 GHz, 3 MB cache, 8192 MB DDR4, 1000GB HDD, 15.6" 1366x768 матов, Intel HD Graphics 620, Ubuntu Linux 16.04, Сив
          Dell Vostro 3568   
Dell Vostro 3568 с Intel Core i5-7200U 2.50 GHz up to 3.10 GHz, 3 MB cache, 8192 MB DDR4, 1000GB HDD, 15.6" 1366x768 матов, Intel HD Graphics 620, Ubuntu Linux 16.04, Черен
          Dell Vostro 3568   
Dell Vostro 3568 с Intel Core i5-7200U 2.50 GHz up to 3.10 GHz, 3 MB cache, 8192 MB DDR4, 1000GB HDD, 15.6" 1366x768 матов, Intel HD Graphics 520, Ubuntu Linux 16.04, Черен
          Dell Vostro 3568   
Dell Vostro 3568 с Intel Core i5-7200U 2.50 GHz up to 3.10 GHz, 3 MB cache, 8192 MB DDR4, 1000GB HDD, 15.6" 1366x768 матов, Intel HD Graphics, Ubuntu Linux 16.04, Черен
          Dell Vostro 3568   
Dell Vostro 3568 с Intel Core i3-6006U 2.00 GHz, 3 MB cache, 8192 MB DDR4, 1000GB HDD, 15.6" 1366x768, Intel HD Graphics 520, Ubuntu Linux 16.04, Grey
          Dell Vostro 3568   
Dell Vostro 3568 с Intel Core i3-6006U 2.00 GHz, 3 MB cache, 8192 MB DDR4, 1000GB HDD, 15.6" 1366x768 матов, Intel HD Graphics 520, Ubuntu Linux 16.04, Черен
          Dell Vostro 3568   
Dell Vostro 3568 с Intel Core i3-6006U 2.00 GHz, 3 MB cache, 8192 MB DDR4, 1000GB HDD, 15.6" 1366x768 матов, AMD Radeon R5 M420 (2GB DDR3L), Ubuntu Linux 16.04, Черен
          Dell Vostro 3568   
Dell Vostro 3568 с Intel Core i7-7500U 2.70 GHz up to 3.50 GHz, 4 MB cache, 4096MB DDR4, 1000GB HDD, 120GB SSD, 15.6" 1920x1080 IPS матов, AMD Radeon R5 M420X 2GB , Ubuntu Linux 16.04, Черен
          Dell Vostro 3568   
Dell Vostro 3568 с Intel Core i5-7200U 2.50 GHz up to 3.10 GHz, 3 MB cache, 4096MB DDR4, 1000GB HDD, 120GB SSD, 15.6" 1366x768 матов, AMD Radeon R5 M420 (2GB DDR3L), Ubuntu Linux 16.04, черен
          Dell Vostro 3568   
Dell Vostro 3568 с Intel Core i5-7200U 2.50 GHz up to 3.10 GHz, 3 MB cache, 4096MB DDR4, 1000GB HDD, 120GB SSD, 15.6" 1366x768 матов, Intel HD Graphics 620, Ubuntu Linux 16.04, Сив
          Dell Vostro 3568   
Dell Vostro 3568 с Intel Core i5-7200U 2.50 GHz up to 3.10 GHz, 3 MB cache, 4096MB DDR4, 1000GB HDD, 120GB SSD, 15.6" 1366x768 матов, AMD Radeon R5 M420X 2GB, Ubuntu Linux 16.04, Черен
          Dell Vostro 3568   
Dell Vostro 3568 с Intel Core i5-7200U 2.50 GHz up to 3.10 GHz, 3 MB cache, 4096MB DDR4, 1000GB HDD, 120GB SSD, 15.6" 1366x768 матов, Intel HD Graphics 620, Ubuntu Linux 16.04, Черен
          Dell Vostro 3568   
Dell Vostro 3568 с Intel Core i5-7200U 2.50 GHz up to 3.10 GHz, 3 MB cache, 4096MB DDR4, 1000GB HDD, 120GB SSD, 15.6" 1366x768 матов, Intel HD Graphics 520, Ubuntu Linux 16.04, Черен
          Dell Vostro 3568   
Dell Vostro 3568 с Intel Core i5-7200U 2.50 GHz up to 3.10 GHz, 3 MB cache, 4096MB DDR4, 1000GB HDD, 120GB SSD, 15.6" 1366x768 матов, Intel HD Graphics, Ubuntu Linux 16.04, Черен
          Dell Vostro 3568   
Dell Vostro 3568 с Intel Core i3-6006U 2.00 GHz, 3 MB cache, 4096MB DDR4, 1000GB HDD, 120GB SSD, 15.6" 1366x768, Intel HD Graphics 520, Ubuntu Linux 16.04, Grey
          Dell Vostro 3568   
Dell Vostro 3568 с Intel Core i3-6006U 2.00 GHz, 3 MB cache, 4096MB DDR4, 1000GB HDD, 120GB SSD, 15.6" 1366x768 матов, Intel HD Graphics 520, Ubuntu Linux 16.04, Черен
          Dell Vostro 3568   
Dell Vostro 3568 с Intel Core i3-6006U 2.00 GHz, 3 MB cache, 4096MB DDR4, 1000GB HDD, 120GB SSD, 15.6" 1366x768 матов, AMD Radeon R5 M420 (2GB DDR3L), Ubuntu Linux 16.04, Черен
          Low Cost Cloud Hosting starting only at $1   
Register your domain at daddyhosts.com to avail Low cost Cloud Hosting for any platform including Linux, Windows, WordPress, Magento, Drupal or other

          shadowsocks 安装   

Install the Command Line Client

If you prefer command line client, then you can install it on your Linux with the following command.

Debian

sudo apt-get install python-pip
sudo pip install shadowsocks

Ubuntu

Yes, you can use the above commands to install shadowsocks client on ubuntu. But it will install it under ~/.local/bin/ directory and it causes loads of trouble. So I suggest using su to become root first and then issue the following two commands.

apt-get install python-pip
pip install shadowsocks

Fedora/Centos

sudo yum install python-setuptools   or   sudo dnf install python-setuptools
sudo easy_install pip
sudo pip install shadowsocks

OpenSUSE

sudo zypper install python-pip
sudo pip install shadowsocks

Archlinux

sudo pacman -S python-pip
sudo pip install shadowsocks

As you can see the command of installing shadowsocks client is the same to the command of installing shadowsocks server, because the above command will install both the client and the server. You can verify this by looking at the installation script output

Downloading/unpacking shadowsocks
Downloading shadowsocks-2.8.2.tar.gz
Running setup.py (path:/tmp/pip-build-PQIgUg/shadowsocks/setup.py) egg_info for package shadowsocks

Installing collected packages: shadowsocks
Running setup.py install for shadowsocks

Installing sslocal script to /usr/local/bin
Installing ssserver script to /usr/local/bin
Successfully installed shadowsocks
Cleaning up...

sslocal is the client software and ssserver is the server software. On some Linux distros such as Ubuntu, the shadowsocks client sslocal is installed under /usr/local/bin. On others such as Arch, sslocal is installed under /usr/bin/. You can use the whereis command to find the exact location.

user@debian:~$ whereis sslocal
sslocal: /usr/local/bin/sslocal

Create a Configuration File

we will create a configuration file under /etc/

sudo vi /etc/shadowsocks.json

Put the following text in the file. Replace server-ip with your actual IP and set a password.

{
"server":"server-ip",
"server_port":8000,
"local_address": "127.0.0.1",
"local_port":1080,
"password":"your-password",
"timeout":600,
"method":"aes-256-cfb"
}

Save and close the file. Next start the client using command line

sslocal -c /etc/shadowsocks.json

To run in the background

sudo sslocal -c /etc/shadowsocks.json -d start
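A common reason for sslocal failing to start is a typo in the config file. Since the config is plain JSON, you can sanity-check its syntax before starting the client (a quick sketch using Python's bundled json.tool; the sample below uses a documentation IP — substitute your real server IP):

```shell
# Write a sample config to a temporary path (use your real server IP in practice)
cat > /tmp/shadowsocks-sample.json <<'EOF'
{
"server":"198.51.100.1",
"server_port":8000,
"local_address": "127.0.0.1",
"local_port":1080,
"password":"your-password",
"timeout":600,
"method":"aes-256-cfb"
}
EOF

# json.tool parses the file; success means the syntax is valid,
# failure prints the exact syntax error
python3 -m json.tool < /tmp/shadowsocks-sample.json > /dev/null && echo "config OK"
```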

Auto Start the Client on System Boot

Edit /etc/rc.local file

sudo vi /etc/rc.local

Put the following line above the exit 0 line:

sudo sslocal -c /etc/shadowsocks.json -d start

Save and close the file. Next time you start your computer, shadowsocks client will automatically start and connect to your shadowsocks server.

Check if It Works

After you rebooted your computer, enter the following command in terminal:

sudo systemctl status rc-local.service

If your sslocal command works then you will get this output:


● rc-local.service - /etc/rc.local Compatibility
Loaded: loaded (/etc/systemd/system/rc-local.service; enabled; vendor preset: enabled)
Active: active (running) since Fri 2015-11-27 03:19:25 CST; 2min 39s ago
Process: 881 ExecStart=/etc/rc.local start (code=exited, status=0/SUCCESS)
CGroup: /system.slice/rc-local.service
├─ 887 watch -n 60 su matrix -c ibam
└─1112 /usr/bin/python /usr/local/bin/sslocal -c /etc/shadowsocks....

As you can see from the last line, the sslocal command created a process whose pid is 1112 on my machine. It means shadowsocks client is running smoothly. And of course you can tell your browser to connect through your shadowsocks client to see if everything goes well.

If for some reason your /etc/rc.local script won’t run, then check the following post to find the solution.
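On distros that use systemd, an alternative to /etc/rc.local is running the client from its own unit file. A minimal sketch (assuming sslocal is installed at /usr/local/bin/sslocal, as shown above; the unit name is illustrative):

```ini
# /etc/systemd/system/shadowsocks-local.service (sketch)
[Unit]
Description=Shadowsocks local client
After=network.target

[Service]
ExecStart=/usr/local/bin/sslocal -c /etc/shadowsocks.json
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Enable and start it with `sudo systemctl enable --now shadowsocks-local.service`.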

How to enable /etc/rc.local with Systemd






abin 2016-05-13 22:56 发表评论

           Redis 代理服务Twemproxy    

1、twemproxy explore

      当我们有大量 Redis 或 Memcached 的时候,通常只能通过客户端的一些数据分配算法(比如一致性哈希)来实现集群存储的特性。虽然 Redis 已经公布了 Redis Cluster 方案,但还不是很成熟,不适合用在正式生产环境。在 Redis 的 Cluster 方案正式推出之前,我们通过 Proxy 的方式来实现集群存储。

       Twitter,世界最大的Redis集群之一部署在Twitter用于为用户提供时间轴数据。Twitter Open Source部门提供了Twemproxy。

     Twemproxy,也叫nutcracker,是Twitter开源的一个redis和memcache代理服务器。redis作为一个高效的缓存服务器,非常具有应用价值。但是当使用比较多的时候,就希望可以通过某种方式统一进行管理,避免每个应用每个客户端管理连接的松散性,同时在一定程度上变得可以控制。

      Twemproxy是一个快速的单线程代理程序,支持Memcached ASCII协议和更新的Redis协议:

     它全部用C写成,使用Apache 2.0 License授权。项目在Linux上可以工作,而在OSX上无法编译,因为它依赖了epoll API.

      Twemproxy 通过引入一个代理层,可以将其后端的多台 Redis 或 Memcached 实例进行统一管理与分配,使应用程序只需要在 Twemproxy 上进行操作,而不用关心后面具体有多少个真实的 Redis 或 Memcached 存储。 

2、twemproxy特性:

    • 支持失败节点自动删除

      • 可以设置重新连接该节点的时间
      • 可以设置连接多少次之后删除该节点
      • 该方式适合作为cache存储
    • 支持设置HashTag

      • 通过HashTag可以自己设定将两个 KEY hash 到同一个实例上去。
    • 减少与redis的直接连接数

      • 保持与redis的长连接
      • 可设置代理与后台每个redis连接的数目
    • 自动分片到后端多个redis实例上

      • 多种hash算法:能够使用不同的策略和散列函数支持一致性hash。
      • 可以设置后端实例的权重
    • 避免单点问题

      • 可以平行部署多个代理层.client自动选择可用的一个
    • 支持redis pipelining request

           支持请求的流式与批处理,降低来回的消耗

    • 支持状态监控

      • 可设置状态监控ip和端口,访问ip和端口可以得到一个json格式的状态信息串
      • 可设置监控信息刷新间隔时间
    • 高吞吐量

      • 连接复用,内存复用。
      • 将多个连接请求,组成reids pipelining统一向redis请求。

     另外可以修改redis的源代码,抽取出redis中的前半部分,作为一个中间代理层。最终都是通过linux下的epoll事件机制提高并发效率,其中nutcracker本身也是使用epoll的事件机制,并且在性能测试上的表现非常出色。

3. Twemproxy problems and limitations


Twemproxy has some shortcomings due to the limits of its design, such as: 
  • No support for operations across multiple values, e.g. intersection, union and difference of sets (MGET and DEL are exceptions)
  • No support for Redis transactions
  • Error reporting is still incomplete
  • The SELECT command is not supported either

4. Installation and configuration 

The detailed installation steps can be found on GitHub: https://github.com/twitter/twemproxy
The main commands to install Twemproxy are: 
apt-get install automake  
apt-get install libtool  
git clone git://github.com/twitter/twemproxy.git  
cd twemproxy  
autoreconf -fvi  
./configure --enable-debug=log  
make  
src/nutcracker -h

With the commands above the installation is complete; next comes the configuration. Below is a typical configuration: 
    redis1:  
      listen: 127.0.0.1:6379 #the port Twemproxy listens on  
      redis: true #whether this is a Redis proxy  
      hash: fnv1a_64 #the hash function to use  
      distribution: ketama #the key distribution algorithm  
      auto_eject_hosts: true #whether to temporarily eject a node when it stops responding  
      timeout: 400 #timeout in milliseconds  
      server_retry_timeout: 2000 #retry interval in milliseconds  
      server_failure_limit: 1 #number of failures after which a node is ejected  
      servers: #all the Redis nodes below (IP:port:weight)  
       - 127.0.0.1:6380:1  
       - 127.0.0.1:6381:1  
       - 127.0.0.1:6382:1  
      
    redis2:  
      listen: 0.0.0.0:10000  
      redis: true  
      hash: fnv1a_64  
      distribution: ketama  
      auto_eject_hosts: false  
      timeout: 400  
      servers:  
       - 127.0.0.1:6379:1  
       - 127.0.0.1:6380:1  
       - 127.0.0.1:6381:1  
       - 127.0.0.1:6382:1 

You can run multiple Twemproxy instances at the same time, all of which can serve reads and writes, so your application can completely avoid a single point of failure.


http://blog.csdn.net/hguisu/article/details/9174459/


abin 2015-11-03 19:30 Leave a comment

          Deploying Highly Available Virtual Interfaces With Keepalived   

Linux is a powerhouse when it comes to networking, and provides a full featured and high performance network stack. When combined with web front-ends such as HAProxy, lighttpd, Nginx, Apache or your favorite application server, Linux is a killer platform for hosting web applications. Keeping these applications up and operational can sometimes be a challenge, especially in this age of horizontally scaled infrastructure and commodity hardware. But don't fret, since there are a number of technologies that can assist with making your applications and network infrastructure fault tolerant.

One of these technologies, keepalived, provides interface failover and the ability to perform application-layer health checks. When these capabilities are combined with the Linux Virtual Server (LVS) project, a fault in an application will be detected by keepalived, and the virtual interfaces that are accessed by clients can be migrated to another available node. This article will provide an introduction to keepalived, and will show how to configure interface failover between two or more nodes. Additionally, the article will show how to debug problems with keepalived and VRRP.

What Is Keepalived?


The keepalived project provides a keepalive facility for Linux servers. This keepalive facility consists of a VRRP implementation to manage virtual routers (aka virtual interfaces), and a health check facility to determine if a service (web server, samba server, etc.) is up and operational. If a service fails a configurable number of health checks, keepalived will fail a virtual router over to a secondary node. While useful in its own right, keepalived really shines when combined with the Linux Virtual Server project. This article will focus on keepalived, and a future article will show how to integrate the two to create a fault tolerant load-balancer.

Installing KeepAlived From Source Code


Before we dive into configuring keepalived, we need to install it. Keepalived is distributed as source code, and is available in several package repositories. To install from source code, you can execute wget or curl to retrieve the source, and then run "configure", "make" and "make install" to compile and install the software:

$ wget http://www.keepalived.org/software/keepalived-1.1.17.tar.gz
$ tar xfvz keepalived-1.1.17.tar.gz
$ cd keepalived-1.1.17
$ ./configure --prefix=/usr/local
$ make && make install

In the example above, the keepalived daemon will be compiled and installed as /usr/local/sbin/keepalived.

Configuring KeepAlived


The keepalived daemon is configured through a text configuration file, typically named keepalived.conf. This file contains one or more configuration stanzas, which control notification settings, the virtual interfaces to manage, and the health checks to use to test the services that rely on the virtual interfaces. Here is a sample annotated configuration that defines two virtual IP addresses to manage, and the individuals to contact when a state transition or fault occurs:

# Define global configuration directives
global_defs {
    # Send an e-mail to each of the following
    # addresses when a failure occurs
    notification_email {
        matty@prefetch.net
        operations@prefetch.net
    }

    # The address to use in the From: header
    notification_email_from root@VRRP-director1.prefetch.net

    # The SMTP server to route mail through
    smtp_server mail.prefetch.net

    # How long to wait for the mail server to respond
    smtp_connect_timeout 30

    # A descriptive name describing the router
    router_id VRRP-director1
}

# Create a VRRP instance
vrrp_instance VRRP_ROUTER1 {

    # The initial state to transition to. This option isn't
    # really all that valuable, since an election will occur
    # and the host with the highest priority will become
    # the master. The priority is controlled with the priority
    # configuration directive.
    state MASTER

    # The interface keepalived will manage
    interface br0

    # The virtual router id number to assign the routers to
    virtual_router_id 100

    # The priority to assign to this device. This controls
    # who will become the MASTER and BACKUP for a given
    # VRRP instance.
    priority 100

    # How many seconds to wait until a gratuitous arp is sent
    garp_master_delay 2

    # How often to send out VRRP advertisements
    advert_int 1

    # Execute a notification script when a host transitions to
    # MASTER or BACKUP, or when a fault occurs. The arguments
    # passed to the script are:
    #  $1 - "GROUP"|"INSTANCE"
    #  $2 = name of group or instance
    #  $3 = target state of transition
    # Sample: VRRP-notification.sh VRRP_ROUTER1 BACKUP 100
    notify "/usr/local/bin/VRRP-notification.sh"

    # Send an SMTP alert during a state transition
    smtp_alert

    # Authenticate the remote endpoints via a simple
    # username/password combination
    authentication {
        auth_type PASS
        auth_pass 192837465
    }

    # The virtual IP addresses to float between nodes. The
    # label statement can be used to bring an interface
    # online to represent the virtual IP.
    virtual_ipaddress {
        192.168.1.100 label br0:100
        192.168.1.101 label br0:101
    }
}

The configuration file listed above is self-explanatory, so I won't go over each directive in detail. I will point out a couple of items:

  • Each host is referred to as a director in the documentation, and each director can be responsible for one or more VRRP instances
  • Each director will need its own copy of the configuration file, and the router_id, priority, etc. should be adjusted to reflect the nodes name and priority relative to other nodes
  • To force a specific node to master a virtual address, make sure the director's priority is higher than the other virtual routers
  • If you have multiple VRRP instances that need to failover together, you will need to add each instance to a VRRP_sync_group
  • The notification script can be used to generate custom syslog messages, or to invoke some custom logic (e.g., restart an app) when a state transition or fault occurs
  • The keepalived package comes with numerous configuration examples, which show how to configure numerous aspects of the server
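As an illustration of the notify hook described above, a minimal sketch of a notification script might look like the following. The wrapper function and log text are assumptions; keepalived only guarantees the three positional arguments:

```shell
#!/bin/sh
# Sketch of /usr/local/bin/VRRP-notification.sh.
# keepalived invokes it with: $1 = "GROUP"|"INSTANCE", $2 = name, $3 = new state.
vrrp_notify() {
    vtype=$1
    vname=$2
    vstate=$3
    msg="keepalived: ${vtype} ${vname} transitioned to ${vstate}"
    # Log via syslog when available; always echo for visibility.
    command -v logger >/dev/null 2>&1 && logger -t keepalived "$msg"
    echo "$msg"
    # A real script might restart an application here when the state is MASTER.
}
vrrp_notify "${1:-INSTANCE}" "${2:-VRRP_ROUTER1}" "${3:-MASTER}"
```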

Starting Keepalived


Keepalived can be executed from an RC script, or started from the command line. The following example will start keepalived using the configuration file /usr/local/etc/keepalived.conf:

$ keepalived -f /usr/local/etc/keepalived.conf 

If you need to debug keepalived issues, you can run the daemon with the "--dont-fork", "--log-console" and "--log-detail" options:

$ keepalived -f /usr/local/etc/keepalived.conf --dont-fork --log-console --log-detail 

These options will stop keepalived from fork'ing, and will provide additional logging data. Using these options is especially useful when you are testing out new configuration directives, or debugging an issue with an existing configuration file.

Locating The Router That is Managing A Virtual IP


To see which director is currently the master for a given virtual interface, you can check the output from the ip utility:

VRRP-director1$ ip addr list br0
5: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 00:24:8c:4e:07:f6 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.6/24 brd 192.168.1.255 scope global br0
    inet 192.168.1.100/32 scope global br0:100
    inet 192.168.1.101/32 scope global br0:101
    inet6 fe80::224:8cff:fe4e:7f6/64 scope link
       valid_lft forever preferred_lft forever

VRRP-director2$ ip addr list br0
5: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 00:24:8c:4e:07:f6 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.7/24 brd 192.168.1.255 scope global br0
    inet6 fe80::224:8cff:fe4e:7f6/64 scope link
       valid_lft forever preferred_lft forever

In the output above, we can see that the virtual interfaces 192.168.1.100 and 192.168.1.101 are currently active on VRRP-director1.
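This check can also be scripted. A small sketch follows; the VIP address is the example one from above, and the one-line `ip -o` output format of iproute2 is assumed:

```shell
# Sketch: does this host currently hold a given virtual IP?
holds_vip() {
    # iproute2 one-line output looks like: "5: br0    inet 192.168.1.100/32 ..."
    ip -o addr show 2>/dev/null | grep -q " inet $1/"
}
if holds_vip 192.168.1.100; then
    echo "this node is MASTER for 192.168.1.100"
else
    echo "VIP not present here"
fi
```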

Troubleshooting Keepalived And VRRP


The keepalived daemon will log to syslog by default. Log entries will range from entries that show when the keepalive daemon started, to entries that show state transitions. Here are a few sample entries that show keepalived starting up, and the node transitioning a VRRP instance to the MASTER state:

Jul  3 16:29:56 disarm Keepalived: Starting Keepalived v1.1.17 (07/03,2009)
Jul  3 16:29:56 disarm Keepalived: Starting VRRP child process, pid=1889
Jul  3 16:29:56 disarm Keepalived_VRRP: Using MII-BMSR NIC polling thread...
Jul  3 16:29:56 disarm Keepalived_VRRP: Registering Kernel netlink reflector
Jul  3 16:29:56 disarm Keepalived_VRRP: Registering Kernel netlink command channel
Jul  3 16:29:56 disarm Keepalived_VRRP: Registering gratutious ARP shared channel
Jul  3 16:29:56 disarm Keepalived_VRRP: Opening file '/usr/local/etc/keepalived.conf'.
Jul  3 16:29:56 disarm Keepalived_VRRP: Configuration is using : 62990 Bytes
Jul  3 16:29:57 disarm Keepalived_VRRP: VRRP_Instance(VRRP_ROUTER1) Transition to MASTER STATE
Jul  3 16:29:58 disarm Keepalived_VRRP: VRRP_Instance(VRRP_ROUTER1) Entering MASTER STATE
Jul  3 16:29:58 disarm Keepalived_VRRP: Netlink: skipping nl_cmd msg...

If you are unable to determine the source of a problem with the system logs, you can use tcpdump to display the VRRP advertisements that are sent on the local network. Advertisements are sent to a reserved VRRP multicast address (224.0.0.18), so the following filter can be used to display all VRRP traffic that is visible on the interface passed to the "-i" option:

$ tcpdump -vvv -n -i br0 host 224.0.0.18
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on br0, link-type EN10MB (Ethernet), capture size 96 bytes

10:18:23.621512 IP (tos 0x0, ttl 255, id 102, offset 0, flags [none], proto VRRP (112), length 40) \
    192.168.1.6 > 224.0.0.18: VRRPv2, Advertisement, vrid 100, prio 100, authtype simple, \
    intvl 1s, length 20, addrs: 192.168.1.100 auth "19283746"

10:18:25.621977 IP (tos 0x0, ttl 255, id 103, offset 0, flags [none], proto VRRP (112), length 40) \
    192.168.1.6 > 224.0.0.18: VRRPv2, Advertisement, vrid 100, prio 100, authtype simple, \
    intvl 1s, length 20, addrs: 192.168.1.100 auth "19283746"
    .........

The output contains several pieces of data that can be useful for debugging problems:

authtype - the type of authentication in use (authentication configuration directive)
vrid - the virtual router id (virtual_router_id configuration directive)
prio - the priority of the device (priority configuration directive)
intvl - how often to send out advertisements (advert_int configuration directive)
auth - the authentication token sent (auth_pass configuration directive)

Conclusion


In this article I described how to set up a host to use the keepalived daemon, and provided a sample configuration file that can be used to failover virtual interfaces between servers. Keepalived has a slew of options not covered here, and I will refer you to the keepalived source code and documentation for additional details.



abin 2015-11-01 21:06 Leave a comment

          External USB Virtual 7.1 3D Sound Card Adapter - Current price: 663 Ft   
Features: Brand new and high quality.
Compliant with USB 2.0 Full-Speed (12 Mbps) specification
Compliant with USB Audio Device Class Specification 1.0
Compliant with USB HID Class Specification 1.1
USB bus-powered mode, no external power required.
Connectors: USB Type-A, stereo output jack, mono microphone-input jack.
Functional keys: Microphone-Mute, Headset-Mute, Volume-Up, Volume-Down.
No driver required for Windows 98SE / ME / 2000 / XP / Server 2003 / Vista, Linux, MacOS.
Package Included: 1 x USB 7.1CH Sound Card.
NO Retail Box. Packed Safely in Bubble Bag.
P041004
Important information about your purchase:
Welcome to our page!
To make the transaction easier, please read our terms of purchase, which you automatically accept when ordering.
Discount: if you buy at least 50 pieces from our products, we provide a discount. Please request the discount from our customer service.
For our products with a US mains plug, the US-EU converter adapter found among our products can be ordered separately.
Important! If the description does NOT state "We dont offer color/pattern/size choice", please always write the chosen color/pattern/size in the comment field when ordering; otherwise our colleagues will ship a random one, and in that case we cannot accept complaints afterwards.
Where the statement "We dont offer color/pattern/size choice" appears, unfortunately the color/pattern/size cannot be chosen. In that case our colleagues send the products at random.
Communication: in all cases exclusively by e-mail, so that conversations can be traced back.
Faulty product: we refund the purchase price or re-ship the product, depending on the agreement, after you have returned it to the address provided.
Refund: we refund the purchase price, or re-send the product, if the product does not arrive.
In that case please let us know by e-mail so that we can find a solution to the problem!
Warranty: 3 months! If the product is indeed faulty, please contact us and we will replace it or buy it back, depending on the agreement.
Invoicing: the electronic invoice (PDF format) is issued by our company registered in England; VAT is not itemised, and payment is transferred to our Hungarian company account.
Delivery time: 9-12 working days after the transfer of the amount, but depending on the postal service it can even be 25-35 working days! Our company cannot take responsibility for the postal delivery time; the delivery times mentioned are indicative only!
Very important! Please do not buy if you cannot wait out a possible delivery time of 35 working days!
Shipping: we ship our products from abroad.
Due to our large stock it can happen that one or two products temporarily or permanently run out of our warehouse; in that case we will notify you in time and offer a suitable solution.
Payment: we accept bank transfer only (home banking, net banking), from bank account to bank account; bank/postal cash deposits, pink postal cheques and other methods are NOT accepted!
When transferring, be sure to include the order number in the comment field; otherwise we may not be able to find your order. In that case we obviously cannot ship the product, nor can we notify you, since we have nothing to go on!
Payment/shipping:
- Above 2000 Ft (including postage) we ship the product ONLY and EXCLUSIVELY as registered mail, as follows:
- For registered mail the postage is 890 Ft for the first product and 250 Ft for each additional one.
- As a regular letter under 2000 Ft: the postage is 250 Ft for the first product and 250 Ft for each additional product.
Collection: for customers who do not collect the ordered product from the post office, so that the product is returned to our company, we can re-send it only if the postage is paid again; if you ask for a refund instead, we can refund only the price of the products, without the postage. Collecting the product is your responsibility! If you cannot collect it through our fault, e.g. because of a wrong address, the postage is on us.
If you do not receive an e-mail from us within 24 hours of placing your order, it means that the e-mail address (especially freemail and citromail addresses) bounced our message. In that case, send us a message from another e-mail address.
We wish you pleasant shopping!
External USB Virtual 7.1 3D Sound Card Adapter
Current price: 663 Ft
Auction ends: 2017-07-01 01:22
          USB 2.0 5H V2 CH 7.1 Sound Card for Notebooks - Current price: 779 Ft   
Features: Brand new and high quality.
The USB Virtual 7.1 Channel Sound Adapter is a highly flexible audio interface which can be used with either desktop or notebook systems.
No driver required, just plug and play for instant audio playback; compatible with all major operating systems.
Compliant with USB 2.0 Full-Speed (12 Mbps) specification.
Compliant with USB Audio Device Class Specification 1.0.
Compliant with USB HID Class Specification 1.1.
USB bus-powered mode, no external power required.
Connectors: USB Type-A, stereo output jack, mono microphone-input jack.
Functional keys: Microphone-Mute, Speaker-Mute, Volume-Up, Volume-Down.
Size: 32.5 cm x 0.5 cm (12.8 in x 0.2 in).
System Requirement: Desktop or notebook PC with a USB port.
Windows 98SE/ME/2000/XP/Server 2003/Vista/Windows 7, Linux, MacOS 10 or higher.
Stereo active speaker or stereo earphone.
Package Included: 1 x New USB 7.1 Channel Sound Adapter Converter
NO Retail Box. Packed Safely in Bubble Bag.
P034003
USB 2.0 5H V2 CH 7.1 Sound Card for Notebooks
Current price: 779 Ft
Auction ends: 2017-07-01 01:23
          Technical DBA Team Manager (ITIL, Oracle, Linux, Manager) / HM Revenue and Customs / Newcastle, Staffordshire, United Kingdom   
HM Revenue and Customs/Newcastle, Staffordshire, United Kingdom

Technical DBA Team Manager (Oracle, SQL, Manager)

With 60000+ staff and 50m customers HMRC is one of the biggest organisations in the UK, running the largest digital operation in Government and one of the biggest IT estates in Europe. We have six modern, state-of-the-art digital delivery centres where multiple cross functional agile teams thrive in one of the most dynamic and innovative environments in the UK. We are expanding our CBP Delivery Group and are recruiting into a number of posts within the Revenue & Customs Digital Technology Service in Newcastle.

About the Technical DBA Team Manager (ITIL, Oracle, Linux, Manager) role

This is a hands-on technical management role responsible for the availability and performance of production databases within the agreed KPIs and IT SLAs.

You will be managing a team of DBAs supporting critical production databases for a high-profile HMRC service, as well as engaging in the end-to-end project delivery life-cycle.

You will help ensure the effective operations of database platforms, and proper integration with dependent services through effective staffing, monitoring, metrics, and operational excellence.

You must possess strong leadership, be detail-oriented, a quick decision maker, and have a passion for getting things right.

You will excel at managing multiple projects and tasks, and cross-functional communication within internal Delivery Groups and external suppliers in addition to managing teams during high pressure problem resolution.

You will possess strong written and verbal communication skills and be comfortable handling internal stakeholders and external vendor communications.

The ideal candidate will have experience supporting large-scale, massively concurrent, highly available database systems.

You will lead and performance-manage a new team of talented and dedicated DBAs focusing on the health of the database tier through the complete system lifecycle.

You will support teams through scheduled maintenance and release deployment activities after hours.

You will share domain and technical expertise, providing technical mentorship and support the development of a virtual team community in database administration.

Your experience with Oracle RDBMS will be critical to your success; however, you should be prepared and knowledgeable and willing to innovate to explore new technology offerings that will help HMRC to adopt any future technology platform pertinent to the systems being supported.

Other information for the Technical DBA Team Manager (ITIL, Oracle, Linux, Manager) role

Essential:

• Educated to degree level

• 7+ years of industry experience

• 4+ years of experience leading DBAs

• Relevant hands-on technical management experience of DBA support teams and skills - troubleshoot, debug, evaluate, and resolve database software defects.

• Strong technical background on DBA domain

• Excellent communication skills, written and oral communication skills;

• Well versed with the ITIL framework

• People and performance management

• Ability to take the initiative, set schedules and prioritise independently

Desirable:

• Oracle Certified Practitioner

• Management level certification

• Project management experience (involving database maintenance project planning,

capacity planning, knowledge transfer plans)

• Agile Development framework and DevOps

• Good understanding of the underpinning Oracle technology stack:

• Oracle GoldenGate

• Oracle RAC One Node

• Oracle Database 12c

• Oracle DBFS

• Oracle Data Guard

• Oracle Enterprise Linux

• Oracle Enterprise Manager

Working Pattern:

It should be noted that this role will require the successful candidate to provide support 24/7 outside of normal working hours as part of an on-call rota.

Must pass basic security checks and undertake National Security Clearance - Level 2- if security clearance at this level is not already in place

CV's should clearly demonstrate how the candidate meets the essential criteria and qualifications stated above.

The post is based in Longbenton with occasional travel/ to other HMRC and Government departments/locations and supplier offices.

To apply for the role of Technical DBA Team Manager (ITIL, Oracle, Linux, Manager), please click 'apply now'.

Employment Type: Permanent

Pay: 57,000 to 63,000 GBP (British Pound)
Pay Period: Annual
Other Pay Info: £57,000 - £63,000

Apply To Job
          Linux/Windows Software Support Engineer - Weymouth - SC Cleared / Square One Resources / Weymouth, Dorset, United Kingdom   
Square One Resources/Weymouth, Dorset, United Kingdom

Linux/Windows Software Support Engineer - Weymouth - SC Cleared

Square One are looking for a Linux/Windows Software Support Engineer to come on board for a 3 month contract based in Weymouth.

The Purpose of the Role:

Support the software development teams for the configuration, customization and administration of the Operating Systems (Linux/Windows) and virtualisation environment of the development, test and target platforms in use for delivering large solutions. As a secondary purpose, to support the teams in designing and programming functionalities for the final software solution during administration activities down time.

Essential

Educated to Degree level or equivalent in software, computer science or software related discipline

Certified Linux (RHEL) administrator with at least 3 years' experience

Certified Windows Administrator with at least 3 years' experience

A minimum of 3 years' experience in the following areas:

o Virtualisation platforms using hypervisors/VMs in a multi-OS configuration (Linux/Windows)

o Fine customisation of VMs in a complex networking environment

o PC-Over-IP in virtualised environment

o Network and switch configuration/administration

Experience with Packer and Vagrant.

An understanding of software design methodologies (UML) and programming languages (C++ or Java)

Self-starter and able to learn on the fly

The successful candidate must be capable of achieving security (SC) clearance as a minimum

Desirable

Exposure to Data Distribution Service (DDS)

Commercial experience of UML and OO design methodologies

Commercial experience of Real Time designs, programming concepts and design patterns.

Proficiency in high level programming language (C++ or Java).

Experience of specification development, verification and validation.

Experience of line management or mentoring

Background in defence

Undertake all administration activities for the development, test and target environments of large projects covering:

o Operation Systems administration (Linux/Windows)

o Virtualisation platform configuration (hypervisor, Virtual Machine )

o Networking aspects (Switches, drivers, TCP or UDP IP )

o PC-Over-IP configuration

Contribute to the definition of the system network and topology configuration in support of the software architect.

When required on projects, design, code and unit test software in accordance with the company's procedure and project specific requirements.

Estimate the hours and duration required for own tasks.

Support planning input to project schedules and deliver own work commensurate with those plans.

Contribute information to project reports.

Share Linux/Virtualisation/Networking expertise with the rest of the development team.

This is a 3 month contract based in Weymouth starting immediately.

Linux/Windows Software Support Engineer - Weymouth - SC Cleared

Employment Type: Contract
Duration: 3 months
Other Pay Info: Market rates

Apply To Job
          Oracle DBA (Support, Oracle, Linux) / HM Revenue and Customs / Newcastle, Staffordshire, United Kingdom   
HM Revenue and Customs/Newcastle, Staffordshire, United Kingdom

Oracle Database Administrator (Support, Oracle, Linux)

With 60000+ staff and 50m customers HMRC is one of the biggest organisations in the UK, running the largest digital operation in Government and one of the biggest IT estates in Europe. We have six modern, state-of-the-art digital delivery centres where multiple cross functional agile teams thrive in one of the most dynamic and innovative environments in the UK. We are expanding our CBP Delivery Group and are recruiting into a number of posts within the Revenue & Customs Digital Technology Service in Newcastle.

About the Oracle Database Administrator (Support, Oracle, Linux) role

The database administrator will be responsible for the implementation, configuration, maintenance, and performance of critical Oracle systems to ensure the availability and consistent performance of our corporate applications.

Working as part of a team, the successful candidate will support the development and sustainment of the databases, ensuring operational readiness (security, health and performance), executing data loads, performing monitoring and support of both development and production support teams.

This is a technical role requiring solid technical skills, excellent written and interpersonal skills, the ability to work effectively both independently and within a team environment. Sharing knowledge/skills and developing productive working relationships, as well as being able to use own initiative. A flexible team player with a pro-active outlook to delivery and the rapidly changing working environment.

Responsibilities of the Oracle Database Administrator (Support, Oracle, Linux)

Manage databases through multiple product lifecycle environments, from development to mission-critical production systems to decommissioning on both virtual and physical midrange systems

Configure and maintain database servers and processes, including monitoring of system health and performance, to ensure high levels of performance, availability, and security.

Support development teams to ensure development and implementation support efforts meet integration and performance expectations.

Independently analyse, solve, and correct issues in real time, providing problem resolution end-to-end.

Refine and automate regular processes, track issues, and document changes.

Perform scheduled maintenance and support release deployment activities after hours.

Other Information about the Oracle Database Administrator (Support, Oracle, Linux) role

Essential:

2+ years' experience in database management, performance tuning and optimisation, using native monitoring, maintenance and troubleshooting tools, and backup, restore and recovery models on virtual machines and physical midrange systems.

A good working knowledge of Oracle 12c running on Oracle Enterprise Linux operating systems.

Experience in building virtual multi-tenant databases on a Linux virtual platform, including database upgrades and regular patching and maintenance.

A good knowledge of Oracle GRID Infrastructure plus Oracle Enterprise Manager (OEM).

Has undertaken and can demonstrate appropriate Oracle technical training for the role.

Desirable:

• BSc degree in computer science or equivalent.

• Oracle GoldenGate

• Oracle RAC One Node

• Oracle DBFS

• Oracle Data Guard

• Netbackup

Training for the desirable criteria will be provided for the right candidate who meets the essential criteria.

Working Pattern:

This post is full time; however, applicants who would like to work an alternative working pattern are welcome to apply. All requests will be considered, although the preferred working pattern may not be available.

It should be noted that this role will require the successful candidate to provide support 24/7 outside of normal working hours as part of an on-call rota.

Must pass basic security checks and undertake National Security Clearance (Level 2) if security clearance at this level is not already in place.

CVs should clearly demonstrate how the candidate meets the essential criteria and qualifications stated above.

Sift / Interviews

Applicants will be sifted based upon demonstration of the essential criteria.

The post is based in Newcastle with occasional travel to other HMRC and Government departments/locations and supplier offices.

To apply for the role of Oracle Database Administrator (Support, Oracle, Linux), please click 'apply now'.

Employment Type: Permanent

Pay: 37,537 to 41,488 GBP (British Pound)
Pay Period: Annual
Other Pay Info: £37,537 - £41,488

Apply To Job
          Systems Engineer (Cloud, ITIL, AWS, Linux) / HM Revenue and Customs / Telford, Shropshire, United Kingdom   
HM Revenue and Customs/Telford, Shropshire, United Kingdom

Cloud Systems Engineer (Design, Developer, ITIL, AWS, Linux) - Cloud Delivery Group

Salary: Competitive

Location: Telford

With 60000+ staff and 50m customers HMRC is one of the biggest organisations in the UK, running the largest digital operation in Government and one of the biggest IT estates in Europe. We have six modern, state-of-the-art digital delivery centres where multiple cross functional agile teams thrive in one of the most dynamic and innovative environments in the UK. We are expanding our Cloud Delivery Group and are recruiting into a number of posts within the Revenue & Customs Digital Technology Service in Telford.

Background

This is an exciting opportunity to join HMRC's Cloud Delivery Group (CDG) where you will be working across one of the biggest IT estates in Europe and supporting a large scale and radical transformation that will have a profound impact for both the customers and the staff of HMRC. As part of the Development, Test and Operate (DTO) Directorate, the Cloud Development team is responsible for translating overarching IT strategy into the technical architecture for CDG. This is a unique opportunity for an experienced Technical Architect to work within HMRC's Cloud domain during a time of significant change and transformation as HMRC drives the focus of IT delivery away from product centric solutions and fully exploits the opportunities that Digital Services and Data Analytics can provide. This position will play a key role in supporting Delivery Groups with the creation and execution of technology roadmaps that will drive HMRC's hugely complex IT estate towards a smaller set of strategic systems whilst decommissioning a large proportion of the legacy.

Role Requirements for the Cloud Systems Engineer (Design, Developer, ITIL, AWS, Linux)

The Cloud Systems Engineering team are responsible for the development of Cloud tooling and environments for the Cloud Delivery Group that make up service offerings consumed both internally (within the boundaries of CDG) and externally (service offerings made available to the rest of the IT Department). We are therefore looking for a seasoned Systems Engineer with a solid background in Hosting and Cloud technology. Candidates should have experience working on large enterprise estates, designing and implementing physical and virtual infrastructure and the associated management and deployment tooling.

This is a dynamic and changing environment, so we're looking for someone who's up for working in an ever-changing technology landscape that is centred on Cloud.

Accountabilities of the Cloud Systems Engineer (Design, Developer, ITIL, AWS, Linux):

• Design and develop tooling, products and solutions for the Cloud Delivery Group at the direction of the Product Owners.

• Liaise with CTO and Product Owners to deliver engineering roadmaps showing key items such as upgrades, technical refreshes and new versions;

• Review and ensure conformance of tooling test plans to meet expected quality standards;

• Work as part of a technical team in a collaborative and innovative way, developing CDG products and services;

• Be accountable for personal development and training.

Tasks:

• Work with Senior Systems Engineer to develop tooling design.

• Develop knowledge of cloud provider roadmaps and maintain proficiency in industry technologies and trends.

• Implement new capabilities into the CDG offerings and service catalogues.

• Advise on engineering standards, procedures, methods, tools and techniques.

• Contribute to reviews and audits of projects from an engineering perspective.

• Contribute to the assessment and validation of engineering risk.

• Engage in knowledge transfer across CDG

• Engage in continuous improvement to improve CDG performance.

• Conduct personal professional development to keep up to date on new technologies.

Essential Criteria of the Cloud Systems Engineer (Design, Developer, ITIL, AWS, Linux) role

You will need to demonstrate within your application the following essential experience -

• A good understanding and working knowledge of Public Cloud offerings (AWS, Azure etc.).

• The ability to script and automate all activities in Hyperscale Cloud.

• Solid experience of working with Linux and Microsoft Server Operating Systems.

• Domain and Administration technologies (Active Directory) and designs.

• Backup, Anti-Virus, Monitoring (ELK, Splunk, Grafana etc.).

• Demonstrate an ability to communicate across IT disciplines to get the best solution and ensure nothing gets overlooked that could jeopardise performance or the integrity of the existing IT estate.

• Able to work effectively in pressurised situations and can be relied upon to deliver, irrespective of circumstances.

• Able to work in highly ambiguous situations and without supervision

• The successful applicant for this role will need to be eligible for and willing to undergo SC clearance following appointment in to the post.

Desirable Experience for the Cloud Systems Engineer (Design, Developer, ITIL, AWS, Linux)

• A good understanding or working knowledge of Container technologies such as Docker.

• A good understanding or working knowledge of the following tools: Puppet, Ansible, Jenkins, and Terraform.

• Experience of working in an agile environment and experience with agile methodologies such as TDD, Scrum, Kanban.

• Solid experience of developing DNS across Cloud providers.

• Backup, Anti-Virus, Monitoring (ELK, Splunk, Grafana etc.) specific to Hyperscale Cloud.

• Experience or awareness of ITIL ways of working.

Key leadership behaviours of the Cloud Systems Engineer (Design, Developer, ITIL, AWS, Linux)

• Changing and Improving

• Leading and Communicating

• Delivering at Pace

To apply for the role of Cloud Systems Engineer (Design, Developer, ITIL, AWS, Linux), please click 'apply now'.

Employment Type: Permanent

Pay: 49,875 to 55,125 GBP (British Pound)
Pay Period: Annual
Other Pay Info: £49,875 - £55,125

Apply To Job
          June 2017 App Service update   
This month we shipped improvements to the overview blade, a new unified create experience for both Windows and Linux based apps, as well as a new recommendation history UI.... Read more
          VoIP gateway: 迅时通信 (NewRock) MX60-32S/16 quoted at 11,800 yuan   
 The 迅时通信 (NewRock) MX60-32S/16 adopts a standard rack-mount design, 1U high and 19 inches wide. Its CPU+DSP hardware architecture uses a high-performance 400MHz ARM9 chip as the core processing engine and runs a stable, reliable embedded Linux operating system, giving it very strong large-capacity call handling.
          Solving Nginx logging in 60 lines of Haskell   

Nginx is well-known for only logging to files and being unable to log to syslog out of the box.

There are a few ways around this; one that is often proposed is creating named pipes (or FIFOs) before starting up nginx. Pipes have the same properties as regular files in UNIX (adhering to the important notion that everything is a file in UNIX), but they expect data written to them to be consumed by another process at some point. To compensate for the fact that consumers might sometimes be slower than producers, they maintain a buffer of readily available data, with a hard maximum of 64k on Linux systems for instance.

Small digression: understanding the Linux pipe max buffer size

It can be a bit confusing to figure out what the exact size of a FIFO buffer is in Linux. Our first reflex will be to look at the output of ulimit:

# ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 30
file size               (blocks, -f) unlimited
pending signals                 (-i) 63488
max locked memory       (kbytes, -l) unlimited
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 99
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 63488
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

This seems to indicate that the available pipe size in bytes is 512 * 8, amounting to 4kb. It turns out this is the maximum atomic size of a payload on a pipe (PIPE_BUF), while the kernel reserves several buffers for each created pipe, with a hard limit set in https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/include/linux/pipe_fs_i.h?id=refs/tags/v3.13-rc1#n4.

The limit turns out to be 4096 * 16, amounting to 64kb, still not much.
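The 4096 * 16 figure can also be checked from user space: since kernel 2.6.35, Linux reports a pipe's buffer capacity through fcntl. A minimal sketch in Python, assuming a Linux kernel with F_GETPIPE_SZ support:

```python
import fcntl
import os

# F_GETPIPE_SZ is Linux-specific; fall back to its numeric value (1032)
# on Python versions that do not expose it in the fcntl module.
F_GETPIPE_SZ = getattr(fcntl, "F_GETPIPE_SZ", 1032)

r, w = os.pipe()
size = fcntl.fcntl(w, F_GETPIPE_SZ)  # ask the kernel for the pipe's capacity
print(size)  # typically 65536 (4096 * 16) on a stock Linux kernel
os.close(r)
os.close(w)
```

The companion F_SETPIPE_SZ command can grow a pipe on newer kernels, subject to /proc/sys/fs/pipe-max-size, but the default allocation remains the 64k discussed above.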

Pipe consumption strategies

Pipes are tricky beasts and will bite you if you try to consume them from syslog-ng or rsyslog without anything in between. First let's see what happens when you write to a pipe which has no consumer:

$ mkfifo foo
$ echo bar > foo


That’s right: having no consumer on a pipe results in blocking writes, which will not please nginx, or any other process which expects logging a line to a file to be a fast operation (and in many applications will result in total lock-up).

Even though we can expect a syslog daemon to be up most of the time, this imposes huge availability constraints on a system daemon that can otherwise safely sustain short availability glitches.
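This blocking behaviour can also be detected programmatically: POSIX specifies that opening a FIFO write-only with O_NONBLOCK set fails immediately with ENXIO when no process has it open for reading. A small sketch of that behaviour in Python (the path is made up; any scratch directory will do):

```python
import errno
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.fifo")
os.mkfifo(path)

# Opening a FIFO write-only normally blocks until a reader shows up.
# With O_NONBLOCK the kernel refuses right away with ENXIO, which is
# how a writer can detect that nobody is consuming the pipe.
try:
    fd = os.open(path, os.O_WRONLY | os.O_NONBLOCK)
    os.close(fd)
    no_reader = False
except OSError as e:
    no_reader = (e.errno == errno.ENXIO)

print(no_reader)  # True: nobody has the FIFO open for reading
os.remove(path)
```

A writer that performs this check can fail fast instead of hanging the way the echo above does.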

A possible solution

What if, instead of letting rsyslog do the work, we wrapped the nginx process with a small wrapper utility responsible for pushing logs out to syslog? The utility would:

  • Clean up old pipes
  • Provision pipes
  • Set up a connection to syslog
  • Start nginx in the foreground, while watching pipes for incoming data

The only requirement with regard to nginx’s configuration is to start it in the foreground, which can be enabled with this single line in nginx.conf:

daemon off;

Wrapper behavior

We will assume that the wrapper utility receives a list of command line arguments corresponding to the pipes it has to open. If, for instance, we only log to /var/log/nginx/access.log and /var/log/nginx/error.log, we could call our wrapper - let’s call it nginxpipe - this way:

nginxpipe nginx-access:/var/log/nginx/access.log nginx-error:/var/log/nginx/error.log

Since the wrapper would stay in the foreground to watch for its child nginx process, integration in init scripts has to account for it. For Ubuntu’s upstart this translates to the following configuration in /etc/init/nginxpipe.conf:

respawn
exec nginxpipe nginx-access:/var/log/nginx/access.log nginx-error:/var/log/nginx/error.log

Building the wrapper

For once, the code I’ll show won’t be in Clojure, since it does not lend itself well to such tasks, being hindered by slow startup times and the inability to easily call OS-specific functions. Instead, this will be built in Haskell, which lends itself very well to system programming, much like Go (another more-concise-than-C system programming language).

First, our main function:

-- imports needed by the full program (packages: hslogger, unix, process, directory)
import Control.Concurrent (forkIO, threadDelay)
import Control.Monad (forever, void, when)
import System.Directory (doesFileExist, removeFile)
import System.Environment (getArgs)
import System.IO (openFile, hSetBuffering, hIsEOF, hGetLine, IOMode(ReadMode), BufferMode(LineBuffering))
import System.Log.Handler.Syslog (openlog, Option(PID), Facility(DAEMON))
import System.Log.Logger
import System.Posix.Files (createNamedPipe)
import System.Process (runCommand, waitForProcess)

main = do
  mainlog <- openlog "nginxpipe" [PID] DAEMON NOTICE
  updateGlobalLogger rootLoggerName (setHandlers [mainlog])
  updateGlobalLogger rootLoggerName (setLevel NOTICE)
  noticeM "nginxpipe" "starting up"
  args <- getArgs
  mk_pipes $ map get_logname args
  noticeM "nginxpipe" "starting nginx"
  ph <- runCommand "nginx"
  exit_code <- waitForProcess ph
  noticeM "nginxpipe" $ "nginx stopped with code: " ++ show exit_code

We start by creating a log handler, then use it as our only log destination throughout the program. We then call mk_pipes, which walks over the given arguments, and finally start the nginx process and wait for it to return.

The list of arguments given to mk_pipes is slightly modified: it transforms the initial list consisting of

[ "nginx-access:/var/log/nginx/access.log", "nginx-error:/var/log/nginx/error.log"]

into a list of string-tuples:

[("nginx-access","/var/log/nginx/access.log"), ("nginx-error","/var/log/nginx/error.log")]

To create this modified list we just map over our input list with a simple function:

is_colon x = x == ':'
get_logname path = (ltype, p) where (ltype, (_:p)) = break is_colon path

Next up is the pipe creation; since Haskell has no loops, we use tail recursion to iterate over the list of tuples:

mk_pipes :: [(String,String)] -> IO ()
mk_pipes (pipe:pipes) = do
  mk_pipe pipe
  mk_pipes pipes
mk_pipes [] = return ()

The bulk of work happens in the mk_pipe function:

mk_pipe :: (String,String) -> IO ()
mk_pipe (ltype,path) = do
  safe_remove path
  createNamedPipe path 0o644 -- octal mode; a plain 0644 literal would be read as decimal
  fd <- openFile path ReadMode
  hSetBuffering fd LineBuffering
  void $ forkIO $ forever $ do
    is_eof <- hIsEOF fd
    if is_eof then threadDelay 1000000 else get_line ltype fd

The interesting bit in that function is the last 3 lines, where we create a new “IO thread” with forkIO, inside which we loop forever, waiting for input for at most 1 second and logging to syslog when new input comes in.

The two remaining functions, get_line and safe_remove, have very simple definitions. I intentionally left a small race condition in safe_remove to make it more readable:

safe_remove path = do
  exists <- doesFileExist path
  when exists $ removeFile path

get_line ltype fd = do
  line <- hGetLine fd
  noticeM ltype line

I’m not diving into each line of the code; there is plenty of great literature on Haskell, and I’d recommend “Real World Haskell” as a great first book on the language.

I just wanted to showcase the fact that Haskell is a great alternative for building fast and lightweight system programs.

The awesome part: distribution!

The full source for this program is available at https://github.com/pyr/nginxpipe, it can be built in one of two ways:

  • Using the cabal dependency management system (which calls GHC)
  • With the GHC compiler directly

With cabal you would just run:

cabal install --prefix=/somewhere

Let’s look at the output:

$ ldd /somewhere/bin/nginxpipe 
linux-vdso.so.1 (0x00007fffe67fe000)
librt.so.1 => /usr/lib/librt.so.1 (0x00007fb8064d8000)
libutil.so.1 => /usr/lib/libutil.so.1 (0x00007fb8062d5000)
libdl.so.2 => /usr/lib/libdl.so.2 (0x00007fb8060d1000)
libpthread.so.0 => /usr/lib/libpthread.so.0 (0x00007fb805eb3000)
libgmp.so.10 => /usr/lib/libgmp.so.10 (0x00007fb805c3c000)
libm.so.6 => /usr/lib/libm.so.6 (0x00007fb805939000)
libgcc_s.so.1 => /usr/lib/libgcc_s.so.1 (0x00007fb805723000)
libc.so.6 => /usr/lib/libc.so.6 (0x00007fb805378000)
/lib64/ld-linux-x86-64.so.2 (0x00007fb8066e0000)
$ du -sh /somewhere/bin/nginxpipe
1.9M /somewhere/bin/nginxpipe

That’s right, no crazy dependencies (for instance, this figures out the correct dependencies across Arch Linux, Ubuntu and Debian for me) and a smallish executable.

Obviously this is not a complete solution as-is, but quickly adding support for a real configuration file would not be a huge endeavour, where for instance an alternative command to nginx could be provided.

Hopefully this will help you consider haskell for your system programming needs in the future!


          The death of the configuration file   

Taking on a new platform design recently I thought it was interesting to see how things evolved in the past years and how we design and think about platform architecture.

So what do we do?

As system developers, system administrators and system engineers, what do we do?

  • We develop software
  • We design architectures
  • We configure systems

But that isn’t the purpose of our jobs; for most of us, the purpose is to generate business value. From a non-technical perspective, we generate business value by creating a system which renders one or many functions and provides insight into its operation.

And we do this by developing, logging, configuring and maintaining software across many machines.

When I started doing this - back when knowing how to write a sendmail configuration file could get you a paycheck - it all came down to setting up a few machines: a database server, a web server, a mail server, each logging locally and providing its own way of reporting metrics.

When designing custom software, you would provide reports over a local AF_UNIX socket, and configure your software by writing elegant parsers with yacc (or its GNU equivalent, bison).

When I joined the OpenBSD team, I did a lot of work on configuration files. Ask any member of the team: configuration files are a big concern, and careful attention is put into clean, human-readable and writable syntax; additionally, all configuration files are expected to look and feel the same, for consistency.

It seems as though the current state of large applications now demands another way to interact with operating systems, and some tools are now leading the way.

So what has changed?

While our mission is still the same from a non-technical perspective, the technical landscape has evolved and gone through several phases.

  1. The first era of repeatable architecture

    We first realized that as soon as several machines performed the same task, the need for repeatable, coherent environments became essential. Typical environments used a combination of cfengine, NFS and mostly Perl scripts to achieve these goals.

    Insight and reporting were then provided either by horrible proprietary kludges that I shall not name here, or by emergent tools such as netsaint (now nagios), mrtg and the like.

  2. The XML mistake

    Around that time, we started hearing more and more about XML, then touted as the solution to almost every problem. The rationale was that XML was - somewhat - easy to parse, and would allow developers to develop configuration interfaces separately from the core functionality.

    While this was a noble goal, it was mostly a huge failure. Above all, it was a victory of developers over people using their software, since they didn’t bother writing syntax parsers and let users cope with the complicated syntax.

    Another example was the difference between Linux’s iptables and OpenBSD’s pf. While the former was supposed to be the backend for a firewall handling tool that never saw the light of day, the latter provided a clean syntax.

  3. Infrastructure as code

    Fast forward a couple of years: most users of cfengine were fed up with its limitations, and architectures, while following the same logic as before, became bigger and bigger. The need for repeatable and sane environments was as important as it ever was.

    At that point of time, PXE installations were added to the mix of big infrastructures and many people started looking at puppet as a viable alternative to cfengine.

    puppet provided a cleaner environment, and allowed easier formalization of technology, platform and configuration. Philosophically though, puppet stays very close to cfengine by providing a way to configure large amounts of system through a central repository.

    At that point, large architectures also needed command and control interfaces. As noted before, most of these were implemented as Perl or shell scripts in SSH loops.

    On the monitoring and graphing front, not much was happening, nagios and cacti were almost ubiquitous, while some tools such as ganglia and collectd were making a bit of progress.

Where are we now?

At some point recently, our applications started doing more. While for a long time the canonical dynamic web application was a busy forum, more complex sites started appearing everywhere. We were not building and operating sites anymore but applications. And while with the help of haproxy, varnish and the likes, the frontend was mostly a settled affair, complex backends demanded more work.

At the same time the advent of social enabled applications demanded much more insight into the habits of users in applications and thorough analytics.

New tools emerged to help us along the way:

  • In memory key value caches such as memcached and redis
  • Fast elastic key value stores such as cassandra
  • Distributed computing frameworks such as hadoop
  • And of course on demand virtualized instances, aka: The Cloud
  1. Some daemons only provide small functionality

    The main difference in the new stack found in backend systems is that the software stacks that run are not useful on their own anymore.

    Software such as zookeeper, kafka, rabbitmq serve no other purpose than to provide supporting services in applications, and their functionality is almost only available as libraries to be used in distributed application code.

  2. Infrastructure as code is not infrastructure in code!

    What we missed along the way it seems is that even though our applications now span multiple machines and daemons provide a subset of functionality, most tools still reason with the machine as the top level abstraction.

    puppet for instance is meant to configure nodes, not clusters, and makes dependencies very hard to manage. A perfect example is the complications involved in setting up configurations dependent on other machines.

    Monitoring and graphing, except for ganglia, have long suffered from the same problem.

The new tools we need

We need to kill local configurations, plain and simple. With a simple enough library to interact with distant nodes (starting and stopping services), configuration can happen in a single place; instead of relying on a repository-based configuration manager, configuration should happen from inside applications rather than as an external process.

If this happens in a library, command & control must also be added to the mix, with centralized and tagged logging, reporting and metrics.

This is going to take some time, because it is a huge shift in the way we write software and design applications. Today, configuration management is a very complex stack of workarounds for non standardized interactions with local package management, service control and software configuration.

Today dynamically configuring bind, haproxy and nginx, installing a package on a Debian or OpenBSD, restarting a service, all these very simple tasks which we automate and operate from a central repository force us to build complex abstractions. When using puppet, chef or pallet, we write complex templates because software was meant to be configured by humans.

The same goes for checking the output of running arbitrary scripts on machines.

  1. Where we’ll be tomorrow

    With the ease PaaS solutions bring to developers, and offers such as the ones from VMware and open initiatives such as OpenStack, it seems as though virtualized environments will very soon be found everywhere, even in private companies which will deploy such environments on their own hardware.

    I would not bet on it happening but a terse input and output format for system tools and daemons would go a long way in ensuring easy and fast interaction with configuration management and command and control software.

    While it was a mistake to try to push XML as a terse format replacing configuration files to interact with single machines, a terse format is needed to interact with many machines providing the same service, or to run many tasks in parallel - even though, admittedly, tools such as capistrano or mcollective do a good job at running things and providing sensible output.

  2. The future is now!

    Some projects are leading the way in this new orientation; 2011, as I’ve seen it called, will be the year of the time series boom. For package management and logging, Jordan Sissel released such great tools as logstash and fpm. For easy graphing and deployment, Etsy released great tools, amongst which statsd.

    As for bridging the gap between provisioning, configuration management, command and control and deploys, I think two tools, both based on jclouds, are going in the right direction:

    • Whirr: which lets you start a cluster through code, providing recipes for standard deploys (zookeeper, hadoop)

    • pallet: which lets you describe your infrastructure as code and interact with it in your own code. pallet’s phase approach to cluster configuration provides a smooth dependency framework which allows easy description of dependencies between configuration across different clusters of machines.

  3. Who’s getting left out ?

    One area where things seem to move much slower is network device configuration, for people running open source based load-balancers and firewalls, things are looking a bit nicer, but the switch landscape is a mess. As tools mostly geared towards public cloud services will make their way in private corporate environments, hopefully they’ll also get some of the programmable


          How can I repair dnf?   
Recently I've updated via dnf upgrade. After reboot, I got this error with dnf:

[root@mgpc]#: dnf
Traceback (most recent call last):
  File "/bin/dnf", line 57, in <module>
    from dnf.cli import main
  File "/usr/lib/python3.5/site-packages/dnf/__init__.py", line 31, in <module>
    import dnf.base
  File "/usr/lib/python3.5/site-packages/dnf/base.py", line 26, in <module>
    from dnf.comps import CompsQuery
  File "/usr/lib/python3.5/site-packages/dnf/comps.py", line 29, in <module>
    import dnf.util
  File "/usr/lib/python3.5/site-packages/dnf/util.py", line 31, in <module>
    import librepo
  File "/usr/lib64/python3.5/site-packages/librepo/__init__.py", line 1070, in <module>
    import librepo._librepo
ImportError: librtmp.so.0: cannot open shared object file: No such file or directory

[root@mgpc]#: uname -a
Linux mgpc 4.11.6-201.fc25.x86_64 #1 SMP Tue Jun 20 20:21:11 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
          Microsoft Office 2010 Beta- Free Download   
Just noticed that Microsoft are offering a free download of the beta release of Microsoft Office 2010 Pro Plus. I’ve not had a chance to try it yet; however, I think it looks a lot better than the 2007 release. Sadly that’s all the details I have right now, but I will set it up […]
          Hey Everyone   
Hey everyone, this is my blog, all about the Linux operating system as well as stuff about MS Windows and Mac OS X.
          System 76: Linux hardware maker builds its own distro   
With Pop OS, System 76, the popular maker of hardware with preinstalled Linux, presents its own Ubuntu-based distribution, whose desktop is meant to "stay out of the user's way". (Linux, Ubuntu)
          Can the Linux Stack Clash Vulnerability Affect Containers?   
The recently discovered ‘Stack Clash’ vulnerability in Linux-based systems is another critical security issue like Dirty Cow, but can the stack clash vulnerability affect containers, and what could an attacker do? The short answer is yes, an attacker could exploit the vulnerability to gain root privileges within a container, but not necessarily be able to […]
          Siemens NX 11.0.1 MP04 Win/Linux Update-SSQ | 1.1 GB|   
Siemens NX 11.0.1 MP04 Win/Linux Update-SSQ | 1.1 GB

Improvements

General
  • Top-Bottom stereo output layout has been implemented
  • Materials in the Asset Editor can now be renamed during interactive rendering
  • Grasshopper components have been reorganized
  • The EXR, HDR and VRIMG file formats have been added when saving Bongo animation
  • The VFB Stop button now stops the entire Batch Rendering process
  • Vertex colors have been added and are placed on the UV texture when the channel is set to 0
  • Added scr function GetObjectVRayName
  • Added visSetFocusPoint option to use the Target point (a legacy option that works as in V-Ray 2.0)
  • The ability to export a vrscene during Batch Render has been added
  • The ability to control reflection depth and refraction depth independently in the BRDF Reflection and BRDF Refraction layers has been added
  • A dialog window now prompts when imported geometries have a material already used in the current scene
  • Object as Clipper: allows you to clip with a custom geometry (NURBS or mesh)
  • Set Materials ID to Black has been added to the V-Ray Tool menu; this option sets a black material ID color on all materials

Installation & Licensing
  • The Online License Server (OLS) installer has been updated to version 4.4.1
  • The Swarm installer has been updated to version 1.3.6
  • V-Ray license optimizations at Rhino start-up have been implemented; V-Ray now checks the license from the first plug-in interaction

User Interface
  • Asset Editor icons have been updated to have consistent colors; grayed-out buttons can now be easily identified
  • The Production and Interactive render buttons now become active when pressed. The state indicates whether a rendering process is ongoing, and they can also be used to stop the rendering when deactivated. This applies to both the V-Ray toolbar and the Asset Editor buttons
  • Match Viewport mode has been added to the Render Output / Aspect Ratio drop-down menu. This mode makes sure the rendered image aspect matches the viewport aspect ratio exactly
  • An Update button has been added to the Match Viewport options. It lets you update the aspect ratio in case the viewport resolution changes during interactive rendering
  • "Show Progress Window" has been moved from the Tool menu to the "V-Ray Render and Options" sub-menu
  • The Pack Scene tool has been renamed to Pack Project
  • Geometry type icons have been added to the Asset Editor's geometry list. The icons identify the geometry type even if a custom object name has been used
  • VFB and Asset Editor window parenting has been improved
  • The Pick Focus Point and Update (get viewport aspect) button icons have been updated
  • A Max Depth option has been implemented for the Reflection and Refraction material layers
  • The V-Ray Edges texture Pixel Width can now be used to control edge thickness
  • A Use 3D Mapping checkbox has been implemented for all 3D textures. Previously those textures always used 3D mapping mode; now they can also be used as 2D maps and manipulated with the UVW controls
  • The animation speed of the UI toggle buttons has been increased
  • "Edit Material" has been moved from "V-Ray Render and Option" to the "V-Ray Material" sub-menu
  • "V-Ray RT", the option to render in the viewport, has been renamed to "V-Ray Interactive"
  • The Progressive switch button is now grayed out in Interactive mode

Lights
  • Light type icons have been added to the Asset Editor's light list. The icons identify the light source type and can be used to enable or disable the light
  • Light enable/disable toggles have been implemented for all light types
  • A Directionality parameter has been implemented for the Light Rectangle source
  • Light Rectangle parameters have been re-organized; a Portal Light toggle and drop-down menu, diffuse and specular contribution sliders, and a No Decay option have been added
  • Light Sphere parameters have been re-organized; diffuse and specular contribution sliders and a No Decay option have been added
  • Light Sun parameters have been re-organized; Sky and Ground Albedo parameters are now split into separate tabs
  • Light Mesh parameters have been re-organized; diffuse and specular contribution sliders and a No Decay option have been added
  • Light Dome parameters have been re-organized; diffuse and specular contribution sliders have been added, a Shape drop-down menu (replacing the Dome Spherical option) now toggles between a hemispherical and spherical light shape, and an Affect Alpha option has been added
  • Light Spot parameters have been re-organized; diffuse and specular contribution sliders have been added
  • Light IES parameters have been re-organized; diffuse and specular contribution sliders have been added, and light shape override options (usable to change shadow softness) have been implemented
  • Light Omni parameters have been re-organized; diffuse and specular contribution sliders have been added
  • Light Directional parameters have been re-organized; diffuse and specular contribution sliders have been added

Bug Fixes
  • Changing the anti-aliasing filter type and options is now properly saved with the Rhino scene
  • Crash with light names
  • Proxies inside a block disappeared after the first render or when editing the block
  • Hidden proxies were processed during the render
  • Editing a linked block created a duplicate of the material
  • After non-interactive GPU rendering completed, the VFB Stop button remained active
  • Aerial Perspective became denser every time the interactive render updated
  • Crash when reading malformed materials
  • Emissive textures were not showing in the Rhino viewport
  • Fog color was not transferred properly from V-Ray 2.0 to V-Ray 3.0
  • Rhino immediately stopped responding as soon as a scene was loaded
  • Materials in scenes made using a beta build would lose their diffuse maps
  • Pack Scene did not work with big scenes
  • Caustics did not work properly
  • Opening the Frame Buffer History options window crashed Rhino
  • Setting a value bigger than the default slider range did not update the slider
  • The Material Library panel did not retract when V-Ray objects/lights were selected in the viewport
  • Light color and texture were not read when rendering in CUDA mode
  • The Dome Light default orientation did not match the Environment orientation
  • Changing a Mesh Light position during interactive rendering with Swarm left stale copies of the light behind
  • Adding/updating a light's texture during interactive rendering caused the process to stop
  • Selecting objects through the Asset Editor behaved like sticky selection
  • Material color failed to show in the viewport when a new layer was added on top of the current layer
  • Issues related to batch render and animation rendering
  • Wrong noise scale in materials coming from V-Ray 2.0
  • The Asset Editor's Render button and Renderer section remained in "rendering" state even after the process had finished

DOWNLOAD:
http://nitroflare.com/view/D06D39A8E0CF03A/VRay.3.40.02.for.Rhino.5.Win.part1.rar
http://nitroflare.com/view/098C14F1A751F22/VRay.3.40.02.for.Rhino.5.Win.part2.rar
          TER-BOND Bond   
TER-BOND Bond

TER-BOND Bond

Teradek TER-BOND (TERBOND, TER/BOND, ter-bond) Bond: 3G and 4G Bonding Solution for Remote Video Transmission

Teradek Bond allows on-location video transmission over mobile networks by combining multiple 3G dongles to give fast, reliable upload speeds at a fraction of the cost and size of alternatives. Quality video can be uploaded to the internet or to a receiver from anywhere with a 3G network from this camera-mounted device. The combination of Cube and Bond is the smallest, lowest-power and most cost-effective 3G bonding solution available.

HD Streaming Over Multiple 3G Networks
This is achieved by incorporating cutting-edge technologies such as low-power hardware-based video compression and advanced streaming options like MPEG-TS and RTMP, together with Teradek's revolutionary Adaptive Internet Streaming technology, which constantly adjusts bit rate and buffering on the fly based on varying network conditions.

Includes MPEG-TS Compliant Aggregation Software: Sputnik
To re-combine the multiple data streams, we have developed Sputnik, Teradek's proprietary aggregation software. Sputnik is available as a free download and can be run on any Linux server or hosted in the cloud to reconstruct your video into a single MPEG-TS stream that is compatible with most H.264 IP decoders, including the Cube decoder, the smallest and lowest-priced H.264 to HD-SDI decoder available.

Teradek Bond is compatible with CUBE-155, CUBE-150, CUBE-250 and CUBE-550.
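Teradek's actual bonding protocol is proprietary, but the general idea behind link bonding is simple: split the stream into sequence-numbered packets, spread them across several uplinks, and have the server reassemble them in order. The following toy sketch in Python illustrates only that general idea; the function names and round-robin scheduling are illustrative, not Teradek's implementation:

```python
from itertools import cycle

def bond_split(packets, n_links):
    """Distribute sequence-numbered packets round-robin across n_links uplinks."""
    links = [[] for _ in range(n_links)]
    chooser = cycle(range(n_links))
    for seq, payload in enumerate(packets):
        links[next(chooser)].append((seq, payload))
    return links

def bond_reassemble(links):
    """Merge the per-link packet lists back into the original stream order."""
    merged = [pkt for link in links for pkt in link]
    return [payload for _, payload in sorted(merged)]

stream = [b"frame%d" % i for i in range(6)]
links = bond_split(stream, 3)            # e.g. three 3G dongles
assert bond_reassemble(links) == stream  # server side restores the order
```

A real aggregator like Sputnik must additionally cope with per-link latency differences, loss, and retransmission, which is where the engineering effort lies.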


          TER-BOND BOND 3/4G Bonding Solution (Excludes CUBE and TS Option)   
TER-BOND BOND 3/4G Bonding Solution (Excludes CUBE and TS Option)

TER-BOND BOND 3/4G Bonding Solution (Excludes CUBE and TS Option)

TER-BOND BOND 3/4G Bonding Solution (Excludes CUBE and TS Option): 3G and 4G Bonding Solution for Remote Video Transmission

Deliver HD from anywhere in the world to anywhere in the world. A true marvel of engineering, this device can host 3G and 4G dongles from any cellular network (or networks) for video transmission at high data rates. Incredible power from such a tiny device.

Features:
  • Allows on-location video transmission over mobile networks
  • HD streaming over multiple 3G networks
  • Includes MPEG-TS compliant aggregation software: Sputnik
  • Compatible with CUBE-155, CUBE-150, CUBE-250 and CUBE-550

* Requires CUBE-155, CUBE-150 / CUBE-250 / CUBE-550

Teradek Bond allows on-location video transmission over mobile networks by combining multiple 3G dongles to give fast, reliable upload speeds at a fraction of the cost and size of alternatives. Quality video can be uploaded to the internet or to a receiver from anywhere with a 3G network from this camera-mounted device. The combination of Cube and Bond is the smallest, lowest-power and most cost-effective 3G bonding solution available.

HD Streaming Over Multiple 3G Networks
This is achieved by incorporating cutting-edge technologies such as low-power hardware-based video compression and advanced streaming options like MPEG-TS and RTMP, together with Teradek's revolutionary Adaptive Internet Streaming technology, which constantly adjusts bit rate and buffering on the fly based on varying network conditions.

Includes MPEG-TS Compliant Aggregation Software: Sputnik
To re-combine the multiple data streams, we have developed Sputnik, Teradek's proprietary aggregation software. Sputnik is available as a free download and can be run on any Linux server or hosted in the cloud to reconstruct your video into a single MPEG-TS stream that is compatible with most H.264 IP decoders, including the Cube decoder, the smallest and lowest-priced H.264 to HD-SDI decoder available.

Teradek Bond is compatible with CUBE-155, CUBE-150, CUBE-250 and CUBE-550.


          TER-BOND2HDMI BOND II Integrated HDMI Cellular Bonding Solution   
TER-BOND2HDMI BOND II Integrated HDMI Cellular Bonding Solution

TER-BOND2HDMI BOND II Integrated HDMI Cellular Bonding Solution

Teradek TER-BOND2HDMI (TERBOND2HDMI, TER BOND2HDMI, TER/BOND2HDMI, ter-bond2hdmi) BOND II Integrated HDMI Cellular Bonding Solution: supports up to 6 modems, includes MPEG-TS.

The Teradek Bond line of cellular bonding solutions allows video professionals to broadcast 1080p HD video over aggregated bandwidth from several network interfaces, including 3G/4G/LTE, WiFi, BGAN, Ethernet*, and fiber. Bond devices utilize hardware-based High Profile H.264 compression, resulting in very low power consumption and long run times. Each bonding solution offers a local monitoring capability on iOS devices, quick access to settings via an OLED display, and IFB for communication from the studio to the field. All Bond devices require a Sputnik server, which converts each bonded feed into a standard video format that can be sent to any streaming platform on the Web or to several H.264 decoders.

* Ethernet bonding requires USB to Ethernet adapters

Sputnik Server & Dashboard GUI
Sputnik is Teradek's free software application that sits between Bond and your streaming destination. Sputnik is designed to run on a Linux computer, either in the cloud (using Amazon EC2) or on a local server with a single, publicly addressable TCP port. The software recombines the packets from your cellular modems into a cohesive stream that can be sent to an H.264 decoder or viewed online.

Specifications:
  • HDMI input
  • Six USB slots for 3G/4G/LTE modems, BGAN, Ethernet, WiFi
  • Modem mounting system
  • 250 Kbps to 10 Mbps
  • Supports all standard resolutions and frame rates up to 1080p30
  • IFB support


          FreeDOS Is 23 Years Old, and Counting   

The FreeDOS Project has just reached its 23rd birthday! This is a major milestone for any free software or open-source software project.


          J. and K. Fidler's Cut the Cord, Ditch the Dish, and Take Back Control of Your TV (Iron Violin Press)   

Prospective TV cable-cutters, even those with technical abilities, often are flummoxed in the face of choosing between all of the content options and new technologies available. Reliable sources of complete and neutral information in this space are hard to find, and the fun evaporates rapidly when you're faced with hours of stumbling through forums and strings of searches.


          Linux Systems Administrator - RealInterface - Washington, DC   
Linux Systems Administrator Direct Hire Washington, DC Candidates must be willing to work as a W2 Employee or 1099. This position does not allow Corp to
From Realinterface - Wed, 10 May 2017 03:18:20 GMT - View all Washington, DC jobs
          PRODUCTLINUX.COM   
Auction Type: Bid, Auction End Time: 06/30/2017 07:00 AM (PDT), Price: $20, Number of Bids: 0, Domain Age: 0, Description: , Traffic: 13, Valuation: $0, IsAdult: false
          SOULINUX.ORG   
Auction Type: Bid, Auction End Time: 06/30/2017 07:00 AM (PDT), Price: $20, Number of Bids: 0, Domain Age: 0, Description: , Traffic: 0, Valuation: $0, IsAdult: false
          SUDOSULINUX.COM   
Auction Type: Bid, Auction End Time: 06/30/2017 07:00 AM (PDT), Price: $20, Number of Bids: 0, Domain Age: 0, Description: , Traffic: 11, Valuation: $0, IsAdult: false
          How to Open .001 Files   
For those who don't yet know how to read or open files with extensions .001, .002, .003, .004, .005, .006 and so on: take, for example, a movie file named Titanic.avi.002 (these .001, .002, .003, .004, .005 parts are usually quite large, reaching tens to hundreds of MB). You can use a program called HJ-SPLIT.

The HJ-SPLIT application is available for free from its maker's site at www.hjsplit.org. It is suitable for the operating systems Windows XP, Windows 7, Windows Vista, Windows NT, and Linux/Wine.

If you are still using Windows 9x or Windows ME, I suggest switching to Windows XP, Windows 7, Windows Vista, Windows NT, or Linux/Wine. (That's just a suggestion, though; Windows 9x and Windows ME can also run HJ-SPLIT, hehehe...)


Here are the steps to open .001, .002, .003, .004, .005, .006 (and so on) files:
  • Collect all the files, from .001 through the last part (complete), in one folder
  • Run the HJ-SPLIT program
  • Click the JOIN button
  • Follow the process until it finishes.

          Google, Yahoo and Facebook Committed to Support IPv6   
Tech Info - CALIFORNIA - Just one day after a successful 24-hour IPv6 experiment, Google, Yahoo and Facebook said they would permanently upgrade their main sites to the Internet protocol.

As reported by InfoWorld on Saturday, June 11, 2011: "We saw 65 percent growth in our IPv6 traffic on World IPv6 Day," said Lorenzo Colitti, IPv6 software engineer at Google.

Besides Google, the social networking giant Facebook also saw encouraging results when it tested the Internet protocol.

"At Facebook, we saw more than a million of our users visit us over IPv6," said Don Lee, a senior network engineer at Facebook.

"There were no technical problems during the 24-hour period. That is backed up by the positive comments on our blog; it is exciting to see how passionate people around the world are about IPv6," he added.

After obtaining these positive results, Facebook decided to support IPv6 on its website for developers, at the URL: http://developers.facebook.com/

"We will keep adapting our entire code base to support IPv6," Lee said. "IPv6 allows the Internet to continue its amazing development," he concluded.
          (USA-MI-Kalamazoo) Lead Programmer   
KCMHSAS is seeking a full-time Lead Programmer to develop solutions that allow for effective and efficient service delivery. Development will be done in the Microsoft technology stack (SQL Server, C#, .NET, SharePoint). Deep knowledge of SQL Server Analysis Services, Integration Services, and Reporting Services is necessary. Knowledge of open-source systems (Linux, BSD), Python, and Git is a plus. Bachelor's Degree in Information Technology or Computer Science with 3 years of relevant experience required. Understanding of the public mental health system preferred. We offer competitive compensation and fringe benefits, including medical, vision and dental insurance; disability and workers compensation insurance; paid holidays, a generous Paid Time Off plan, continuing education, a retirement plan and a Deferred Compensation Plan. Individuals of diverse racial, ethnic, and cultural backgrounds along with bilingual candidates are encouraged to apply. KCMHSAS is an equal opportunity employer that encourages diversity and inclusion among its workforce. We strive to empower people to succeed. Physical Requirements / Working Conditions: Physical Efforts: job demands include prolonged sitting and standing as appropriate; may occasionally require light lifting up to 25 pounds, stooping, kneeling, crouching, or bending as appropriate; requires coordination of hands and/or eye/hand/foot. Working Conditions: office environment with noise from computers, copy machines, and telephones; use of a video display terminal (VDT) for periods in excess of 30 minutes at a time; possible eyestrain from extended periods of viewing the VDT; may be exposed to bloodborne pathogens, infectious diseases, and parasites. Travel throughout the Kalamazoo area is required.
          Comment on Undercurrent by Lottie   
Skype has opened its web-based client beta to the entire world, after introducing it generally in the United States and U.K. earlier this month. Skype for Web also now supports Linux and Chromebook for instant messaging communication (no video and voice yet; those require a plug-in installation). The expansion of the beta adds support for a longer list of languages to help reinforce that global functionality.
          Comment on 45 Photo Sharing Sites by Anya   
Skype has opened its web-based client beta to the entire world, following its launch largely in the U.S. and U.K. earlier this month. Skype for Web also now supports Linux and Chromebook for instant messaging (no voice and video yet; those require a plug-in installation). The expansion of the beta adds support for a longer selection of languages to help strengthen that global usability.
          Junior/Senior Drupal Software Developer - Acro Media Inc. - Okanagan, BC   
Some understanding of Linux hosting if at all possible. Hopefully, you have installed Linux on something at least once, even if it was just your Xbox, phone.....
From Acro Media Inc. - Wed, 12 Apr 2017 12:28:59 GMT - View all Okanagan, BC jobs
          12265 Senior Programmer Analyst - CANADIAN NUCLEAR LABORATORIES (CNL) - Chalk River, ON   
Understanding of server technologies (Internet Information Server, Apache, WebLogic), operating systems (Windows 2008R2/2012, HP-UX, Linux) and server security....
From Indeed - Wed, 07 Jun 2017 17:50:30 GMT - View all Chalk River, ON jobs
          Machine Learning Developer - Intel - Toronto, ON   
6 (+) months experience with developing software in Linux and/or Windows,. Develop Machine Learning solutions in form of libraries and accelerators for FPGA, SW...
From Intel - Fri, 30 Jun 2017 10:25:27 GMT - View all Toronto, ON jobs
          Embedded Electrical Engineer - Littelfuse - Saskatchewan   
Real Time Operating System (RTOS) experience such as FreeRTOS, MQX, or Embedded Linux. If you are motivated to succeed and can see yourself in this role, please...
From Littelfuse - Thu, 15 Jun 2017 23:12:35 GMT - View all Saskatchewan jobs
          INFRASTRUCTURE SPECIALIST - A.M. Fredericks - Ontario   
3 – 5 years managing Linux servers. Ability to build and configure servers from the ground up, both Linux & Windows....
From A.M. Fredericks - Sat, 17 Jun 2017 05:07:14 GMT - View all Ontario jobs
          DEVOPS PROGRAMMER - A.M. Fredericks - Ontario   
Proficient in Linux (Python/Bash scripting, cron). Our company is looking to automate core functionality of the business....
From A.M. Fredericks - Sat, 17 Jun 2017 05:07:08 GMT - View all Ontario jobs
          Software engineer - Stratoscale - Ontario   
Highly proficient in a Linux environment. Our teams are spread around the globe but still work closely together in a fast-paced agile environment....
From Stratoscale - Thu, 15 Jun 2017 16:47:38 GMT - View all Ontario jobs
          Solution Architects - Managed Services - OnX Enterprise Solutions - Ontario   
Working knowledge of both Windows and Linux operating systems. OnX is a privately held company that is growing internationally....
From OnX Enterprise Solutions - Mon, 12 Jun 2017 20:52:05 GMT - View all Ontario jobs
          IT Lead, Server Support - Toronto Hydro - Ontario   
Knowledge of server and storage systems such as Red Hat Enterprise Linux, Windows Server 2008 and 2012 Operating Systems, Active Directory, Virtual Desktop...
From Toronto Hydro - Mon, 12 Jun 2017 19:52:14 GMT - View all Ontario jobs
          Senior EPM Solutions Architect - AstralTech - Ontario   
Solid understanding of Linux and Microsoft Windows Server operating system. Senior EPM Solutions Architect....
From AstralTech - Sat, 13 May 2017 08:40:54 GMT - View all Ontario jobs
          DÉVELOPPEUR JAVA FULL-STACK (H/F) - ON-X - Ontario   
Système d'exploitation/ virtualisation ( Linux, Windows, VMware, Docker ). En qualité de Développeur Full-Stack , vous réaliserez les missions suivantes :....
From ON-X - Thu, 06 Apr 2017 07:41:27 GMT - View all Ontario jobs
          Développeur Java - Linux/Android (H/F) Expertise technique - ON-X - Ontario   
Linux et Android. Vous aurez pour mission le développement et la conception des applications Android....
From ON-X - Wed, 05 Apr 2017 07:39:29 GMT - View all Ontario jobs
          Développeurs - Intégrateurs techniques Java J2EE (H/F) Expertise technique - ON-X - Ontario   
Windows, Linux, VMWare. Vous serez en charge du développement de solutions innovantes d'authenfication forte....
From ON-X - Sun, 02 Apr 2017 07:26:34 GMT - View all Ontario jobs
          Développeur J2EE (H/F) Expertise technique - ON-X - Ontario   
Windows, Linux, VMWare. Dans le cadre de notre développement, nous sommes à la recherche d'un(e) Ingénieur Etudes et Développement JAVA J2EE....
From ON-X - Sun, 02 Apr 2017 07:26:32 GMT - View all Ontario jobs
          Product Verification Engineer - Evertz - Ontario   
Proficient with Linux and high level programming or scripting languages such as Python. We are expanding our Product Verification team and looking for...
From Evertz - Fri, 24 Mar 2017 05:42:40 GMT - View all Ontario jobs
          Customer Success Engineer - Stratoscale - Ontario   
Extensive experience troubleshooting remote Linux system issues. As a Customer Success Engineer, you will be providing technical assistance to Stratoscale...
From Stratoscale - Thu, 09 Mar 2017 21:32:53 GMT - View all Ontario jobs
          Device Driver Development Engineer - Intel - Singapore   
Knowledge of XDSL, ETHERNET switch, wireless LAN, Security Engine and microprocessor is an advantage. Linux Driver/Kernel development for Ethernet/DSL/LTE Modem...
From Intel - Sat, 17 Jun 2017 10:23:08 GMT - View all Singapore jobs
          Software Engineer - Embedded & Linux - (Boston)   
Job Description This role is located in our corporate headquarters which are located in Peabody, MA - less than 20 miles North of Boston. AES Corporation is the leader in long-range wireless communications products for the security industry. AES products are deployed in over 85 countries around the globe.
          Software Engineer - Linux Server - (Boston)   
Job Description This role is located in our corporate headquarters which are located in Peabody, MA - less than 20 miles North of Boston. AES Corporation is the leader in long-range wireless communications products for the security industry. AES products are deployed in over 85 countries around the globe.
          Cyber Security Engineer - (Newton)   
Job Description: Want to work for a dynamic company that feels nice and compact but boasts the perks of companies several times its size? With its rapid growth and global nature, Octo may be the place for you! Octo Telematics, NA is seeking a Cyber Security Engineer to design, test, implement and monitor security measures for Octo's systems.

Responsibilities:
  • Analyze and establish security requirements for Octo's systems/networks
  • Defend systems against unauthorized access, modification and/or destruction
  • Configure and support security tools such as firewalls, anti-virus software, patch management systems, etc.
  • Define access privileges, control structures and resources
  • Perform vulnerability testing, risk analyses and security assessments
  • Identify abnormalities and report violations
  • Oversee and monitor routine security administration
  • Develop and update business continuity and disaster recovery protocols
  • Train fellow employees in security awareness, protocols and procedures
  • Design and conduct security audits to ensure operational security
  • Respond immediately to security incidents and provide post-incident analysis
  • Research and recommend security upgrades
  • Provide technical advice to colleagues

Qualifications:
  • Bachelor in Computer Science, Cyber Security or a related technical field
  • 5+ years of experience in Cyber Security

Security Expertise:
  • Expertise in security technology, with one or more product certifications in (BlueCoat, Cisco, SonicWall, Damballa, IBM, Kaspersky, MSAB, Microsoft AD, TippingPoint, F5, VMware)
  • TCP/IP, computer networking, routing and switching
  • DLP, anti-virus and anti-malware
  • Firewall and intrusion detection/prevention protocols
  • Secure coding practices, ethical hacking and threat modeling
  • Windows, UNIX and Linux operating systems
  • ISO 27001/27002, ITIL and COBIT frameworks
  • PCI, HIPAA, NIST, GLBA and SOX compliance assessments
  • C, C++, C#, Java or PHP programming languages
  • Security Information and Event Management (SIEM)

Desirable Security Certifications:
  • Security+: CompTIA's popular base-level security certification
  • CCNA: Cisco Certified Network Associate - Routing and Switching
  • CEH: Certified Ethical Hacker
  • GSEC / GCIH / GCIA: GIAC Security Certifications
  • CISSP: Certified Information Systems Security Professional

Company Description: OCTO NA is a global leader in software and data analytics for the insurance and auto markets, with over four million connected users worldwide and a vast database of 380 billion km of driving data.
          Software Engineer - Linux Server - (Boston)   
Job Description: This role is located in our corporate headquarters, which is located in Peabody, MA - less than 20 miles North of Boston. AES Corporation is the leader in long-range wireless communications products for the security industry. AES products are deployed in over 85 countries around the globe. AES is a fast-paced, start-up-like company environment with room for personal achievement, ownership, and excitement.
          Senior Site Reliability Engineer - (Watertown)   
ID 2017-1880
Job Location(s): US-MA-Watertown
Position Type: Permanent - Full Time

More information about this job:

Overview: This role is based within our Global Technical Operations team. Mimecast Engineers are technical experts who love being in the centre of all the action and play a critical role in making sure our technology stack is fit for purpose, performing optimally with zero downtime. In this high-priority role you will tackle a range of complex software and system issues, including monitoring of large farms of servers in multiple geographic locations, responding to and safeguarding the availability and reliability of our most popular services.

Responsibilities: Contribution and active involvement with every aspect of the production environment, including:
  • Dealing with design issues
  • Running large server farms in multiple geographic locations around the world
  • Performance analysis
  • Capacity planning
  • Assessing application behaviour
  • Linux engineering and systems administration
  • Architecting and writing moderately-sized tools

You will focus on solving difficult problems with scalable, elegant and maintainable solutions.

Qualifications: Essential skills and experience:
  • In-depth expertise in Linux internals and system administration, including configuration and troubleshooting
  • Hands-on experience with performance tuning of Linux OS (CentOS), identifying bottlenecks such as disk I/O, memory, CPU and network issues
  • Extensive experience with at least one scripting language apart from Bash (Ruby, Perl, Python)
  • Strong understanding of TCP/IP networking, including familiarity with concepts such as the OSI stack
  • Ability to analyze network behaviour, performance and application issues using standard tools
  • Hands-on experience automating the provisioning of servers at large scale (using tools such as Kickstart, Foreman, etc.)
  • Hands-on experience with configuration management of server farms (using tools such as MCollective, Puppet, Chef, Ansible, etc.)
  • Hands-on experience with open source monitoring and graphing solutions such as Nagios, Zabbix, Sensu, Graphite, etc.
  • Strong understanding of common Internet protocols and applications such as SMTP, DNS, HTTP, SSH, SNMP, etc.
  • Experience running farms of servers (at least 200+ physical servers) and associated networking infrastructure in a production environment
  • Hands-on experience working with server hardware such as HP ProLiant, Dell PowerEdge or equivalent
  • Comfortable working on-call rotas and out of hours as and when required to ensure uptime of services

Desirable skills:
  • Working with PostgreSQL databases
  • Administering Java-based applications
  • Knowledge of MVC frameworks such as Ruby on Rails
  • Experience with container technology

Rewards: We offer a highly competitive rewards and benefits package including pension, private healthcare, life cover and gym subsidization.
          Java/Microservices - (Ipswich)   
Hello, Principal Java/Microservices Software Engineers
Duration: 6+ months contract to hire
Location: Ipswich, MA

Requirements:
  • Minimum 10 years of experience in specification, design, development and maintenance of enterprise-scale, mission-critical distributed systems with demanding non-functional requirements
  • Bachelor's Degree in Computer Science, Computer Information Systems or a related field of study; Master's Degree preferred
  • 8+ years of experience with SOA concepts, including data services and canonical models
  • 8+ years of experience working with relational databases
  • 8+ years of experience building complex server-side solutions in Java and/or C#
  • 8+ years of experience in the software development lifecycle
  • 3+ years of experience building complex solutions utilizing integration frameworks and ESB
  • Demonstrated strong knowledge and experience applying enterprise patterns to solving business problems

Preferred Qualifications:
  • Leadership experience
  • Strong abilities troubleshooting and tuning distributed environments processing a high volume of transactions
  • Familiarity with model-driven architecture
  • Familiarity with BPM technologies
  • Experience with any of the following technologies: Oracle, MySQL, SQL Server, Linux, Windows, NFS, NetApp, REST/SOAP, ETL, XML technologies
  • In-depth technical understanding of systems, databases, networking, and computing environments
  • Familiarity with NLP and search technologies, AWS cloud-based technologies, content management systems, the publishing domain, and EA frameworks such as TOGAF and Zachman
  • 2+ years of experience building complex Big Data solutions
  • Excellent verbal, written and presentation skills with the ability to communicate complex technical concepts to technical and non-technical professionals

Regards,
Pallavi
781-791-3115 (468)
Keywords: Java, Microservices, cloud, AWS, architect
Source:
http://www.juju.com/jad/000000009qiqw5?partnerid=af0e5911314cbc501beebaca7889739d&exported=True&hosted_timestamp=0042a345f27ac5dc0413802e189be385daf54a16310431f6ff8f92f7af39df48
          Software Development Engineer in Test - Folio - (Ipswich)   
Skills
Requirements:
  • 5+ yrs Java & object-oriented design/programming
  • Implementation of 1 or more production RESTful interfaces in a microservices model
  • 2+ yrs product implementation experience with databases, both SQL and NoSQL (PostgreSQL specifically is a plus)
  • 2+ yrs product implementation experience in a cloud computing environment (AWS specifically is a plus)
  • 3+ yrs experience using Agile and/or SAFe

Preferred Qualifications:
  • CI/CD using (e.g.) Jenkins, Maven, Gradle
  • SCM - Git/GitHub
  • Test Driven Development (TDD) and automated unit testing
  • Developing automated integration and acceptance tests
  • Automating UI testing (e.g. Selenium, Sauce Labs)
  • Developing performance and load tests at high scale (e.g. JMeter)
  • General HTTP knowledge, including familiarity with cURL or similar tools
  • Linux: general knowledge, shell scripting (RedHat/Amazon Linux specifically is a plus)
  • Virtualization: Docker, Vagrant, etc.
  • Open Source Software: general knowledge of the development model, experience contributing
  • RAML, JSON, XML
  • JavaScript and related tools/frameworks, both client-side and server-side: React, Node.js, webpack, npm/yarn, etc.
  • Security-related experience: SSO, OAuth, SAML, LDAP, etc.
  • Logging/Monitoring/Alerting/Analytics: SumoLogic, Datadog, collectd, SNMP, JMX, etc.

Why the North Shore of Boston and EBSCO are great places to live and work! Here at EBSCO we will provide relocation assistance to the best and brightest people. We are 45 minutes outside of Boston, just minutes from the beach in Ipswich, MA. Ipswich is part of the North Shore and contains a wide variety of locally owned shops, restaurants, and farms.
          Canonical warns of a critical Linux vulnerability   
Attackers can, under certain circumstances, inject and execute malicious code. The vulnerability also enables denial-of-service attacks. It resides in the systemd background service. Ubuntu and Debian already provide patches.
          Andy Wingo: guile 2.2 omg!!!   

Oh, good evening my hackfriends! I am just chuffed to share a thing with yall: tomorrow we release Guile 2.2.0. Yaaaay!

I know in these days of version number inflation that this seems like a very incremental, point-release kind of a thing, but it's a big deal to me. This is a project I have been working on since soon after the release of Guile 2.0 some 6 years ago. It wasn't always clear that this project would work, but now it's here, going into production.

In that time I have worked on JavaScriptCore and V8 and SpiderMonkey and so I got a feel for what a state-of-the-art programming language implementation looks like. Also in that time I ate and breathed optimizing compilers, and really hit the wall until finally paging in what Fluet and Weeks were saying so many years ago about continuation-passing style and scope, and eventually came through with a solution that was still CPS: CPS soup. At this point Guile's "middle-end" is, I think, totally respectable. The backend targets a quite good virtual machine.

The virtual machine is still a bytecode interpreter for now; native code is a next step. Oddly my journey here has been precisely opposite, in a way, to An incremental approach to compiler construction; incremental, yes, but starting from the other end. But I am very happy with where things are. Guile remains very portable, bootstrappable from C, and the compiler is in a good shape to take us the rest of the way to register allocation and native code generation, and performance is pretty ok, even better than some natively-compiled Schemes.

For a "scripting" language (what does that mean?), I also think that Guile is breaking nice ground by using ELF as its object file format. Very cute. As this seems to be a "Andy mentions things he's proud of" segment, I was also pleased with how we were able to completely remove the stack size restriction.

high fives all around

As is often the case with these things, I got the idea for removing the stack limit after talking with Sam Tobin-Hochstadt from Racket and the PLT group. I admire Racket and its makers very much and look forward to stealing from (that is, working with) them in the future.

Of course the ideas for the contification and closure optimization passes are in debt to Matthew Fluet and Stephen Weeks for the former, and Andy Keep and Kent Dybvig for the latter. The intmap/intset representation of CPS soup itself is highly indebted to the late Phil Bagwell, to Rich Hickey, and to Clojure folk; persistent data structures were an amazing revelation to me.

Guile's virtual machine itself was initially heavily inspired by JavaScriptCore's VM. Thanks to WebKit folks for writing so much about the early days of Squirrelfish! As far as the actual optimizations in the compiler itself, I was inspired a lot by V8's Crankshaft in a weird way -- it was my first touch with fixed-point flow analysis. As most of yall know, I didn't study CS, for better and for worse; for worse, because I didn't know a lot of this stuff, and for better, as I had the joy of learning it as I needed it. Since starting with flow analysis, Carl Offner's Notes on graph algorithms used in optimizing compilers was invaluable. I still open it up from time to time.

While I'm high-fiving, large ups to two amazing support teams: firstly to my colleagues at Igalia for supporting me on this. Almost the whole time I've been at Igalia, I've been working on this, for about a day or two a week. Sometimes at work we get to take advantage of a Guile thing, but Igalia's Guile investment mainly pays out in the sense of keeping me happy, keeping me up to date with language implementation techniques, and attracting talent. At work we have a lot of language implementation people, in JS engines obviously but also in other niches like the networking group, and it helps to be able to transfer hackers from Scheme to these domains.

I put in my own time too, of course; but my time isn't really my own either. My wife Kate has been really supportive and understanding of my not-infrequent impulses to just nerd out and hack a thing. She probably won't read this (though maybe?), but it's important to acknowledge that many of us hackers are only able to do our work because of the support that we get from our families.

a digression on the nature of seeking and knowledge

I am jealous of my colleagues in academia sometimes; of course it must be this way, that we are jealous of each other. Greener grass and all that. But when you go through a doctoral program, you know that you push the boundaries of human knowledge. You know because you are acutely aware of the state of recorded knowledge in your field, and you know that your work expands that record. If you stay in academia, you use your honed skills to continue chipping away at the unknown. The papers that this process reifies have a huge impact on the flow of knowledge in the world. As just one example, I've read all of Dybvig's papers, with delight and pleasure and avarice and jealousy, and learned loads from them. (Incidentally, I am given to understand that all of these are proper academic reactions :)

But in my work on Guile I don't actually know that I've expanded knowledge in any way. I don't actually know that anything I did is new and suspect that nothing is. Maybe CPS soup? There have been some similar publications in the last couple years but you never know. Maybe some of the multicore Concurrent ML stuff I haven't written about yet. Really not sure. I am starting to see papers these days that are similar to what I do and I have the feeling that they have a bit more impact than my work because of their medium, and I wonder if I could be putting my work in a more useful form, or orienting it in a more newness-oriented way.

I also don't know how important new knowledge is. Simply being able to practice language implementation at a state-of-the-art level is a valuable skill in itself, and releasing a quality, stable free-software language implementation is valuable to the world. So it's not like I'm negative on where I'm at, but I do feel wonderful talking with folks at academic conferences and wonder how to pull some more of that into my life.

In the meantime, I feel like (my part of) Guile 2.2 is my master work in a way -- a savepoint in my hack career. It's fine work; see A Virtual Machine for Guile and Continuation-Passing Style for some high level documentation, or many of these bloggies for the nitties and the gritties. OKitties!

getting the goods

It's been a joy over the last two or three years to see the growth of Guix, a packaging system written in Guile and inspired by GNU stow and Nix. The laptop I'm writing this on runs GuixSD, and Guix is up to some 5000 packages at this point.

I've always wondered what the right solution for packaging Guile and Guile modules was. At one point I thought that we would have a Guile-specific packaging system, but one with stow-like characteristics. We had problems with C extensions though: how do you build one? Where do you get the compilers? Where do you get the libraries?

Guix solves this in a comprehensive way. From the four or five bootstrap binaries, Guix can download and build the world from source, for any of its supported architectures. The result is a farm of weirdly-named files in /gnu/store, but the transitive closure of a store item works on any distribution of that architecture.

This state of affairs was clear from the Guix binary installation instructions that just have you extract a tarball over your current distro, regardless of what's there. The process of building this weird tarball was always a bit ad-hoc though, geared to Guix's installation needs.

It turns out that we can use the same strategy to distribute reproducible binaries for any package that Guix includes. So if you download this tarball, and extract it as root in /, then it will extract some paths in /gnu/store and also add a /opt/guile-2.2.0. Run Guile as /opt/guile-2.2.0/bin/guile and you have Guile 2.2, before any of your friends! That pack was made using guix pack -C lzip -S /opt/guile-2.2.0=/ guile-next glibc-utf8-locales, at Guix git revision 80a725726d3b3a62c69c9f80d35a898dcea8ad90.

(If you run that Guile, it will complain about not being able to install the locale. Guix, like Scheme, is generally a statically scoped system; but locales are dynamically scoped. That is to say, you have to set GUIX_LOCPATH=/opt/guile-2.2.0/lib/locale in the environment, for locales to work. See the GUIX_LOCPATH docs for the gnarlies.)

Alternately of course you can install Guix and just guix package -i guile-next. Guix itself will migrate to 2.2 over the next week or so.

Welp, that's all for this evening. I'll be relieved to push the release tag and announcements tomorrow. In the meantime, happy hacking, and yes: this blog is served by Guile 2.2! :)


          Andy Wingo: encyclopedia snabb and the case of the foreign drivers   

Peoples of the blogosphere, welcome back to the solipsism! Happy 2017 and all that. Today's missive is about Snabb (formerly Snabb Switch), a high-speed networking project we've been working on at work for some years now.

What's Snabb all about you say? Good question and I have a nice answer for you in video and third-party textual form! This year I managed to make it to linux.conf.au in lovely Tasmania. Tasmania is amazing, with wild wombats and pademelons and devils and wallabies and all kinds of things, and they let me talk about Snabb.

Click to download video

You can check that video on the youtube if the link above doesn't work; slides here.

Jonathan Corbet from LWN wrote up the talk in an article here, which besides being flattering is a real windfall as I don't have to write it up myself :)

In that talk I mentioned that Snabb uses its own drivers. We were recently approached by a customer with a simple and honest question: does this really make sense? Is it really a win? Why wouldn't we just use the work that the NIC vendors have already put into their drivers for the Data Plane Development Kit (DPDK)? After all, part of the attraction of a switch to open source is that you will be able to take advantage of the work that others have produced.

Our answer is that while it is indeed possible to use drivers from DPDK, there are costs and benefits on both sides and we think that when we weigh it all up, it makes both technical and economic sense for Snabb to have its own driver implementations. It might sound counterintuitive on the face of things, so I wrote this long article to discuss some perhaps under-appreciated points about the tradeoff.

Technically speaking there are generally two ways you can imagine incorporating DPDK drivers into Snabb:

  1. Bundle a snapshot of the DPDK into Snabb itself.

  2. Somehow make it so that Snabb could (perhaps optionally) compile against a built DPDK SDK.

As part of a software-producing organization that ships solutions based on Snabb, I need to be able to ship a "known thing" to customers. When we ship the lwAFTR, we ship it in source and in binary form. For both of those deliverables, we need to know exactly what code we are shipping. We achieve that by having a minimal set of dependencies in Snabb -- only LuaJIT and three Lua libraries (DynASM, ljsyscall, and pflua) -- and we include those dependencies directly in the source tree. This requirement of ours rules out (2), so the option under consideration is only (1): importing the DPDK (or some part of it) directly into Snabb.

So let's start by looking at Snabb and the DPDK from the top down, comparing some metrics, seeing how we could make this combination.

                                      Snabb   DPDK
Code lines                            61K     583K
Contributors (all-time)               60      370
Contributors (since Jan 2016)         32      240
Non-merge commits (since Jan 2016)    1.4K    3.2K

These numbers aren't directly comparable, of course; in Snabb our unit of code change is the merge rather than the commit, and in Snabb we include a number of production-ready applications like the lwAFTR and the NFV, but they are fine enough numbers to start with. What seems clear is that the DPDK project is significantly larger than Snabb, so adding it to Snabb would fundamentally change the nature of the Snabb project.

So depending on the DPDK makes it so that suddenly Snabb jumps from being a project that compiles in a minute to being a much more heavy-weight thing. That could be OK if the benefits were high enough and if there weren't other costs, but there are indeed other costs to including the DPDK:

  • Data-plane control. Right now when I ship a product, I can be responsible for the whole data plane: everything that happens on the CPU when packets are being processed. This includes the driver, naturally; it's part of Snabb and if I need to change it or if I need to understand it in some deep way, I can do that. But if I switch to third-party drivers, this is now out of my domain; there's a wall between me and something that's running on my CPU. And if there is a performance problem, I now have someone to blame that's not myself! From the customer perspective this is terrible, as you want the responsibility for software to rest in one entity.

  • Impedance-matching development costs. Snabb is written in Lua; the DPDK is written in C. I will have to build a bridge, and keep it up to date as both Snabb and the DPDK evolve. This impedance-matching layer is also another source of bugs; either we make a local impedance matcher in C or we bind everything using LuaJIT's FFI. In the former case, it's a lot of duplicate code, and in the latter we lose compile-time type checking, which is a no-go given that the DPDK can and does change API and ABI.

  • Communication costs. The DPDK development list had 3K messages in January. Keeping up with DPDK development would become necessary, as the DPDK is now in your dataplane, but it costs significant amounts of time.

  • Costs relating to mismatched goals. Snabb tries to win development and run-time speed by searching for simple solutions. The DPDK tries to be a showcase for NIC features from vendors, placing less of a priority on simplicity. This is a very real cost in the form of the way network packets are represented in the DPDK, with support for such features as scatter/gather and indirect buffers. In Snabb we were able to do away with this complexity by having simple linear buffers, and our speed did not suffer; adding the DPDK again would either force us to marshal and unmarshal these buffers into and out of the DPDK's format, or otherwise to reintroduce this particular complexity into Snabb.

  • Abstraction costs. A network function written against the DPDK typically uses at least three abstraction layers: the "EAL" environment abstraction layer, the "PMD" poll-mode driver layer, and often an internal hardware abstraction layer from the network card vendor. (And some of those abstraction layers are actually external dependencies of the DPDK, as with Mellanox's ConnectX-4 drivers!) Any discrepancy between the goals and/or implementation of these layers and the goals of a Snabb network function is a cost in developer time and in run-time. Note that those low-level HAL facilities aren't considered acceptable in upstream Linux kernels, for all of these reasons!

  • Stay-on-the-train costs. The DPDK is big and sometimes its abstractions change. As a minor player just riding the DPDK train, we would have to invest a continuous amount of effort into just staying aboard.

  • Fork costs. The Snabb project has a number of contributors but is really run by Luke Gorrie. Because Snabb is so small and understandable, if Luke decided to stop working on Snabb or take it in a radically different direction, I would feel comfortable continuing to maintain (a fork of) Snabb for as long as is necessary. If the DPDK changed goals for whatever reason, I don't think I would want to continue to maintain a stale fork.

  • Overkill costs. Drivers written against the DPDK have many considerations that simply aren't relevant in a Snabb world: kernel drivers (KNI), special NIC features that we don't use in Snabb (RDMA, offload), non-x86 architectures with different barrier semantics, threads, complicated buffer layouts (chained and indirect), interaction with specific kernel modules (uio-pci-generic / igb-uio / ...), and so on. We don't need all of that, but we would have to bring it along for the ride, and any changes we might want to make would have to take these use cases into account so that other users won't get mad.

So there are lots of costs if we were to try to hop on the DPDK train. But what about the benefits? The goal of relying on the DPDK would be that we "automatically" get drivers, and ultimately that a network function would be driver-agnostic. But this is not necessarily the case. Each driver has its own set of quirks and tuning parameters; in order for a software development team to be able to support a new platform, the team would need to validate the platform, discover the right tuning parameters, and modify the software to configure the platform for good performance. Sadly this is not a trivial amount of work.

Furthermore, using a different vendor's driver isn't always easy. Consider Mellanox's DPDK ConnectX-4 / ConnectX-5 support: the "Quick Start" guide has you first install MLNX_OFED in order to build the DPDK drivers. What is this thing exactly? You go to download the tarball and it's 55 megabytes. What's in it? 30 other tarballs! If you build it somehow from source instead of using the vendor binaries, then what do you get? All that code, running as root, with kernel modules, and implementing systemd/sysvinit services!!! And this is just step one!!!! Worse yet, this enormous amount of code powering a DPDK driver is mostly driver-specific; what we hear from colleagues whose organizations decided to bet on the DPDK is that you don't get to amortize much knowledge or validation when you switch between an Intel and a Mellanox card.

In the end when we ship a solution, it's going to be tested against a specific NIC or set of NICs. Each NIC will add to the validation effort. So if we were to rely on the DPDK's drivers, we would have paid all the costs but we wouldn't save very much in the end.

There is another way. Instead of relying on so much third-party code that it is impossible for any one person to grasp the entirety of a network function, much less be responsible for it, we can build systems small enough to understand. In Snabb we just read the data sheet and write a driver. (Of course we also benefit by looking at DPDK and other open source drivers as well to see how they structure things.) By only including what is needed, Snabb drivers are typically only a thousand or two thousand lines of Lua. With a driver of that size, it's possible for even a small ISV or in-house developer to "own" the entire data plane of whatever network function you need.

Of course Snabb drivers have costs too. What are they? Are customers going to be stuck forever paying for drivers for every new card that comes out? It's a very good question and one that I know is in the minds of many.

Obviously I don't have the whole answer, as my role in this market is a software developer, not an end user. But having talked with other people in the Snabb community, I see it like this: Snabb is still in relatively early days. What we need are about three good drivers. One of them should be for a standard workhorse commodity 10Gbps NIC, which we have in the Intel 82599 driver. That chipset has been out for a while so we probably need to update it to the current commodities being sold. Additionally we need a couple cards that are going to compete in the 100Gbps space. We have the Mellanox ConnectX-4 and presumably ConnectX-5 drivers on the way, but there's room for another one. We've found that it's hard to actually get good performance out of 100Gbps cards, so this is a space in which NIC vendors can differentiate their offerings.

We budget somewhere between 3 and 9 months of developer time to create a completely new Snabb driver. Of course it usually takes less time to develop Snabb support for a NIC that is only incrementally different from others in the same family that already have drivers.

We see this driver development work to be similar to the work needed to validate a new NIC for a network function, with the additional advantage that it gives us up-front knowledge instead of the best-effort testing later in the game that we would get with the DPDK. When you add all the additional costs of riding the DPDK train, we expect that the cost of Snabb-native drivers competes favorably against the cost of relying on third-party DPDK drivers.

In the beginning it's natural that early adopters of Snabb make investments in this base set of Snabb network drivers, as they would to validate a network function on a new platform. However over time as Snabb applications start to be deployed over more ports in the field, network vendors will also see that it's in their interests to have solid Snabb drivers, just as they now see with the Linux kernel and with the DPDK, and given that the investment is relatively low compared to their already existing efforts in Linux and the DPDK, it is quite feasible that we will see the NIC vendors of the world start to value Snabb for the performance that it can squeeze out of their cards.

So in summary, in Snabb we are convinced that writing minimal drivers that are adapted to our needs is an overall win compared to relying on third-party code. It lets us ship solutions that we can feel responsible for: both for their operational characteristics as well as their maintainability over time. Still, we are happy to learn and share with our colleagues all across the open source high-performance networking space, from the DPDK to VPP and beyond.


          Phil Normand: WebKitGTK+ and HTML5 fullscreen video   

HTML5 video is really nice and all, but one annoying thing is the lack of a fullscreen video specification; it is currently up to the User-Agent to allow fullscreen video display.

WebKit allows a fullscreen button in the media controls and Safari can, on user demand only, switch a video to fullscreen and display nice controls over the video. There's also a WebKit-specific DOM API that allows custom media controls to also make use of that feature, as long as it is initiated by a user gesture. Vimeo's HTML5 player uses that feature, for instance.

Thanks to the efforts made by Igalia to improve the WebKit GTK+ port and its GStreamer media-player, I have been able to implement support for this fullscreen video display feature. So any application using WebKitGTK+ can now make use of it :)

The way it is done in this first implementation is that a new fullscreen GTK+ window is created and our GStreamer media player inserts an autovideosink in the pipeline and overlays the video in the window. Some simple controls are supported; the UI is actually similar to Totem's. One improvement we could make in the future would be to allow applications to override or customize that simple controls UI.

The nice thing about this is that we of course use the GstXOverlay interface and that it’s implemented by the linux, windows and mac os video sinks :) So when other WebKit ports start to use our GStreamer player implementation it will be fairly easy to port the feature and allow more platforms to support fullscreen video display :)

Benjamin also suggested that we could reuse the cairo surface of our WebKit videosink and paint it in a fullscreen GdkWindow. But this should be done only when his Cairo/Pixman patches to enable hardware acceleration land.

So there is room for improvement but I believe this is a first nice step for fullscreen HTML5 video consumption from Epiphany and other WebKitGTK+-based browsers :) Thanks a lot to Gustavo, Sebastian Dröge and Martin for the code-reviews.


          Sebastian Pölsterl: Announcing scikit-survival – a Python library for survival analysis built on top of scikit-learn   

I've been meaning to do this release for quite a while now, and last week I finally had some time to package everything and update the dependencies. scikit-survival contains the majority of the code I developed during my Ph.D.

About Survival Analysis

Survival analysis – also referred to as reliability analysis in engineering – refers to a type of problem in statistics where the objective is to establish a connection between a set of measurements (often called features or covariates) and the time to an event. The name survival analysis originates from clinical research: in many clinical studies, one is interested in predicting the time to death, i.e., survival. Broadly speaking, survival analysis is a type of regression problem (one wants to predict a continuous value), but with a twist. Consider a clinical study which investigates coronary heart disease and has been carried out over a 1-year period, as in the figure below.

Patient A was lost to follow-up after three months with no recorded cardiovascular event, patient B experienced an event four and a half months after enrollment, patient D withdrew from the study two months after enrollment, and patient E did not experience any event before the study ended. Consequently, the exact time of a cardiovascular event could only be recorded for patients B and C; their records are uncensored. For the remaining patients it is unknown whether they did or did not experience an event after termination of the study. The only valid information that is available for patients A, D, and E is that they were event-free up to their last follow-up. Therefore, their records are censored.

Formally, each patient record consists of a set of covariates $x \in \mathbb{R}^d$, and the time $t > 0$ when an event occurred or the time $c > 0$ of censoring. Since censoring and experiencing an event are mutually exclusive, it is common to define an event indicator $\delta \in \{0; 1\}$ and the observable survival time $y > 0$. The observable time $y$ of a right-censored sample is defined as
\[ y = \min(t, c) =
\begin{cases}
t & \text{if } \delta = 1 , \\
c & \text{if } \delta = 0 ,
\end{cases}
\]
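In code, the definition above is tiny. The following sketch (purely illustrative, not part of scikit-survival) maps a true event time $t$ and a censoring time $c$ to the observable pair $(y, \delta)$, using the follow-up times of patients A and B from the study description and encoding an unrecorded event time as infinity.

```python
def observe(t, c):
    """Observable survival time y = min(t, c) with event indicator delta.

    t -- true event time; use float("inf") if no event was recorded
    c -- censoring time (end of follow-up for this patient)
    """
    return (t, 1) if t <= c else (c, 0)

# Patient B: event at 4.5 months, before the 12-month end of the study.
print(observe(4.5, 12.0))          # (4.5, 1) -- uncensored record
# Patient A: lost to follow-up at 3 months; true event time unknown (> 3).
print(observe(float("inf"), 3.0))  # (3.0, 0) -- censored record
```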

What is scikit-survival?

Recently, many methods from machine learning have been adapted to these kinds of problems: random forests, gradient boosting, and support vector machines, many of which are only available for R, but not Python. Some of the traditional models are part of lifelines or statsmodels, but none of those libraries plays nice with scikit-learn, which is the quasi-standard machine learning framework for Python.

This is exactly where scikit-survival comes in. Models implemented in scikit-survival follow the scikit-learn interfaces. Thus, it is possible to use PCA from scikit-learn for dimensionality reduction and feed the low-dimensional representation to a survival model from scikit-survival, or cross-validate a survival model using the classes from scikit-learn. You can see an example of the latter in this notebook.
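To make the interface claim concrete, here is a deliberately trivial, hypothetical estimator written against the scikit-learn fit/predict convention; it is not a scikit-survival class, but real scikit-survival models follow the same contract (with an event indicator and an observable time per sample as the target), which is what lets them plug into scikit-learn pipelines and cross-validation.

```python
class MedianTimePredictor:
    """Toy estimator illustrating the scikit-learn interface convention.

    Purely illustrative, NOT part of scikit-survival: it ignores the
    covariates and always predicts the median observed time from fit().
    """

    def fit(self, X, y):
        # y holds one (event, time) pair per sample, mirroring the
        # event-indicator / observable-time convention described above.
        times = sorted(time for event, time in y)
        self.median_time_ = times[len(times) // 2]
        return self  # returning self is part of the scikit-learn contract

    def predict(self, X):
        return [self.median_time_] * len(X)

X = [[0.0], [1.0], [2.0]]
y = [(True, 4.5), (False, 3.0), (False, 12.0)]
model = MedianTimePredictor().fit(X, y)
print(model.predict(X))  # [4.5, 4.5, 4.5]
```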

Download and Install

The source code is available at GitHub and can be installed via Anaconda (currently only for Linux) or pip.

conda install -c sebp scikit-survival

pip install scikit-survival

The API documentation is available here and scikit-survival ships with a couple of sample datasets from the medical domain to get you started.


          DirtyCow: The biggest Linux kernel vulnerability ever seen   

The vulnerability has existed since Linux 2.6.22 and affects all systems: CentOS, Ubuntu, Debian, Red Hat, etc. In short, an unprivileged user on the system can gain root privileges.

The vulnerability reportedly enables two things:

  1. An unprivileged user on the system can exploit it to gain write access to read-only files.
  2. It gives a user on the system the ability to modify on-disk binaries and to bypass the permission mechanism that prevents modification without the proper privileges.

Since my systems knowledge is not great, I've settled for a rough translation; this is for those interested.

Sources

https://dirtycow.ninja/
https://github.com/dirtycow/dirtycow.github.io/wik...
https://www.youtube.com/watch?v=kEsshExn7aE
https://bobcares.com/blog/dirty-cow-vulnerability/ (fix)
http://arstechnica.com/security/2016/10/android-ph... (android)


          System Administrator Senior (Unix) - Freddie Mac - Reston, VA   
Responsible for regular backup and restore operations for AIX, Linux and Solaris Systems using Tivoli Storage Manager....
From Freddie Mac - Fri, 23 Jun 2017 17:52:19 GMT - View all Reston, VA jobs
          IT Generalist II (Unix Systems Admin) - Freddie Mac - Reston, VA   
Responsible for regular backup and restore operations for AIX, Linux and Solaris Systems using Tivoli Storage Manager....
From Freddie Mac - Tue, 20 Jun 2017 23:48:24 GMT - View all Reston, VA jobs
          Jetpack 2: Released!!   
Five days ago, on June 24, the game Jetpack 2 was finally released.

I've been waiting for this game for a long time: 17 years at least.  I still remember reading "Wait for Jetpack 2 in 2000" when exiting Jetpack, the original game, which I was hooked on.

The game is a great improvement over the original Jetpack, which ran on DOS.
Start screen of the game Jetpack



Start screen of Jetpack 2
While the current release is for Windows, it is playable on Linux thanks to Wine.  Still, the developer says he is looking into porting it to other platforms.

This is what Jetpack 2 looks like

Well, at least I will be able to play the game while I wait for the Linux version, hehe!
          Using Gazelle for a Greasemonkey Script   
scp-greasemonkey

Hi all readers! I've been working very hard on my startup for the last few months, and before that I was busy moving to France and so on, so I have not blogged much. However, development on Gazelle has proceeded despite the absence of public updates.

The language has developed significantly, with most improvements being subtle under the hood bug fixes and performance improvements.

I thought I would write a new tutorial on using it that would highlight a few of these new features!

The `Secure, Contain, Protect` Wiki

As a child of modernity, I am very interested in developments which are unique to this particular technological age. Blogs and micro-media like Twitter are the most obvious examples of media enabled by modernity, but they are, as works primarily authored by one person and limited in scope to personal goings-on, not particularly interesting.

Far more interesting, to me, are collaborative fiction projects like Orion's Arm and The SCP Wiki. These are community oriented and driven creative projects where members write fiction in a shared universe setting. The requirements of such a project produce interesting story-telling strategies: The SCP wiki, for instance, has, as its major unit of storytelling, entries describing various supernatural artifacts or entities. Woven through these entries are larger narratives, connecting multiple entries and characters together. This structure allows easy collaboration between users towards the creation of a large fictional universe.

So I spend a lot of time, when I can, reading the SCP wiki because I find both its content and its form inspiring.

However...

I do have a small gripe with the SCP wiki's design. One can browse the SCP entries by tag (for instance, extradimensional), but if you look at the page, you see just an unsorted list of SCP entries, where each link is just `SCP-NNN`, N being the SCP number. The wiki's style sheet also seems not to mark visited SCP entries with a different color, so these pages make reading through a set of entries kind of difficult: you frequently find yourself clicking multiple times on the same entry, only to find that you've read it before.

So today we will write a Greasemonkey script which makes these tag pages a bit easier to browse. We'll use Gazelle, of course.

Setting up Gazelle

Gazelle might work on Windows, but for now it requires Linux. You need Shadchen-el, my Emacs Lisp pattern matching library, and Gazelle, of course. Get them via git:


git clone https://github.com/VincentToups/shadchen-el.git
git clone https://github.com/VincentToups/gazelle.git

And add both directories created above to your Emacs Lisp load path. Gazelle is a lisp and so I heartily recommend using ParEdit too.

SUPER IMPORTANT

It is an absolute necessity to byte-compile Gazelle and Shadchen. They are very macro heavy libraries and perform hundreds of times better when byte-compiled.

Visit shadchen.el with Emacs and then M-x byte-compile-and-load <RETURN> to ensure that Shadchen is byte compiled. With Gazelle you can visit the file rebuild.el and do the same to rebuild all of Gazelle. Note that Gazelle and Shadchen tend to use a lot of special variable binding depth and stack, so rebuild.el also sets those limits quite high.

Writing Our Script

Gazelle now includes a pretty complete major mode for editing Gazelle source code. Since I'm writing this document with Emacs Muse, the syntax highlighting visible here is that of the Gazelle Major Mode. It highlights most of the Gazelle primitive operations and most higher level Gazelle forms.

Let's write our script!

Visit a file called scp-tag-page-fixer.gazelle. This will open a buffer in Gazelle mode, ready for our Greasemonkey script. Greasemonkey scripts have specially formatted comments at the top of the page which interface with the Greasemonkey system, so our first piece of code is just:


(comment
"==UserScript==
@name        Fix SCP Tag Page
@namespace   scp wiki
@description Sort and add descriptions to SCP Tag Pages
@include     http://www.scp-wiki.net/system:page-tags/tag/*
@require     http://code.jquery.com/jquery-1.10.2.js
@version     1
==/UserScript==")

We can already compile this to Javascript by invoking gz:transcode-this-file, bound to C-c C-k, but of course the result will just be a JS file with the above comment.

Meat and Potatoes

There are certain things about Lisp and Javascript which are just fundamentally out of step. One of these is that Javascript has the notion of `operators` for common functions like +, -, *, and so on. In Gazelle one can access these operators by using the primitive symbols _+, _-, _*, etc., but this is ugly and weird. You cannot, for instance, say


(.. _+ (apply this [: 1 2 3]))

Which is reasonable behavior. Gazelle is designed to be used with its module system, and one almost always imports the module hooves/hooves, which implements function versions of all of these operators, so that you can say:


(require (("hooves/hooves" :all))
 (.. + (apply this [: 1 2 3])))

but it is difficult to use such a system for a Greasemonkey project. For this sort of one-off script, Gazelle exposes a special form, (gazelle:essentials) which expands to the contents, more or less, of hooves/hooves so that you can access them in your script. So our script should now be:


(comment
"==UserScript==
@name        Fix SCP Tag Page
@namespace   scp wiki
@description Sort and add descriptions to SCP Tag Pages
@include     http://www.scp-wiki.net/system:page-tags/tag/*
@require     http://code.jquery.com/jquery-1.10.2.js
@version     1
==/UserScript==")

(gazelle:essentials)

Now let's skip to the main body of the script and then fill in utility functions afterwards:


(comment
"==UserScript==
@name        Fix SCP Tag Page
@namespace   scp wiki
@description Sort and add descriptions to SCP Tag Pages
@include     http://www.scp-wiki.net/system:page-tags/tag/*
@require     http://code.jquery.com/jquery-1.10.2.js
@version     1
==/UserScript==")

(gazelle:essentials)

...

(progn
  (comment "The progn here serves to simply delineate the main
  action of the greasemonkey script.")
  (var parent      (jQuery "#tagged-pages-list")
       children    [:])
  (comment "Note that in Gazelle we tend to use `var` to
  introduce variables in a block as this is more consistent with
  the underlying Javascript idioms and generates slightly faster
  code.")

  (comment "We hide the parent to minimize reflows in the document object.")
  (.. (jQuery parent) (hide))

  (comment "We want to sort all the entries on the page, so we
  find them, collect them into an array and remove them from the
  dom.")
  (.. (jQuery parent)
      (find ".pages-list-item")
      (each (lambda (index element)
              (match (.. (jQuery element) (find "a") (attr "href") (split "-"))
                     ([: "/scp" n]
                      (.. (jQuery element) (remove))
                      (children.push element))
                     (otherwise undefined)))))

  (comment "Now we sort the entries.  Entries which aren't basic
  SCP pages, of the form `scp-N` we sort to the front of the
  list, otherwise we sort by SCP number.")
  (children.sort
   list-item-comparitor)

  (comment "We now re-append the entries, which have been sorted,
  to the dom in the correct order.")
  (for ((var i 0)
        (< i children.length)
        (set! i (_+ i 1)))
       (.. (jQuery parent) (append [children i])))

  (comment "Finally we loop through the entries again, fetching
  the description of each SCP and adding a node to the DOM for
  each.  We could, of course, do this in the previous step.  We
  separate it out here for clarity.")
  (.. (jQuery parent)
      (find ".pages-list-item")
      (each (lambda (index element)
              (var the-href (.. (jQuery element) (find "a") (attr "href")))
              (match (.. the-href (split "-"))
                     ((or [: "/scp" _] [: "/scp" _ _])
                      (.. (jQuery element)
                          (append (jQuery (_+ "<span>" (get-description-from-link the-href) "</span>")))))
                     (otherwise undefined)))))
  (.. (jQuery parent) (show))

  undefined)

This code is pretty self-explanatory: We use `jQuery` to find the element on the page which contains all the links, which we hide to prevent DOM reflows while we work. Then we query inside that element for all `pages-list-item` elements, which we remove from the DOM in preparation for sorting, appending each to an array.

Then we sort the array using a comparator we will define momentarily, place the elements back into their container in the right order, and then finally loop through them again, appending the description of each, which we fetch with a helper function.

Note, as the comments above indicate, that in Gazelle we tend to use var at the top of a block of code to introduce bindings whereas in other Lisps we'd use let. This is a concession to the underlying Javascript environment. Also note that for complex binding jobs we use Gazelle's built-in pattern matcher, match, to bind, destructure and test the types of values. Gazelle's pattern matcher is the mechanism by which the language exposes complex function interfaces like keyword and optional arguments, incidentally: Gazelle function argument lists are just patterns which match against `arguments`.

Let's take a closer look at the match expression used in the function which removes the list elements from the DOM:


(lambda (index element)
    (match (.. (jQuery element) (find "a") (attr "href") (split "-"))
        ([: "/scp" n]
         (.. (jQuery element) (remove))
         (children.push element))
        (otherwise undefined)))

The special form match, which is built into Gazelle, takes as its first argument a value. Each expression after that is a list whose head is a pattern and whose tail is a list of expressions to evaluate if that pattern matches the value. In this case, the value is the url of the link in our element, split on "-".

We have two clauses; the first matches only a two-element array whose first element is "/scp" and whose second element is a number. This is the active clause of the match, which removes the element and adds it to our list to be sorted. The second pattern is a catch-all. A symbol pattern matches anything, including undefined, and binds the symbol to the match value. Here we just ignore other elements. So we remove only those elements whose links look like "/scp-10".

It doesn't matter here, but the value of the last form in the matching clause becomes the value of the match expression.

This is all straightforward. It is now just a matter of filling in our missing function declarations. Note that Gazelle does not forward declare functions the way Javascript does with `function NAME () {}`, so we need to put our function definitions physically before this main block.

list-item-comparitor

The function list-item-comparitor has a nice example of a slightly more advanced use of pattern matching:


(define (list-item-comparitor e1 e2)
  (comment "Given two list item elements, compare their href's
  and return their ordering.")
  (var l1 (.. (jQuery e1) (find "a") (attr "href")))
  (var l2 (.. (jQuery e2) (find "a") (attr "href")))

  (match [: (.. l1 (split "-")) (.. l2 (split "-"))]
         ([: [: "/scp" (call parse-int n1)]
             [: "/scp" (call parse-int n2)]]

          (if (< n1 n2) -1
            (if (=== n1 n2) 0
              (if (> n1 n2) 1))))
         ([: [: something] [: "/scp" n]]
          1)
         (otherwise
          -1)))

Here we extract the links into `l1` and `l2` and then match against the split version of each string. The first pattern matches an array of two two-element arrays and uses the call pattern to match the results of applying parseInt (denoted parse-int), binding them to n1 and n2. In this case we compare these integers numerically to determine the order.

The second clause matches anything as the first element of the array and an scp link as the second, and says that the SCP link is always bigger than anything else. Finally, we just return -1 in all other cases.
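Since Gazelle's pattern syntax may be unfamiliar, here is the ordering described above written out in Python. This is a hand translation for illustration only: the names are mine, and it follows the behavior described in the prose rather than Gazelle's generated code.

```python
import functools

def compare_scp_hrefs(href1, href2):
    # Orders hrefs as the prose describes: plain "/scp-N" links compare
    # numerically, and an SCP link is "bigger" than anything else, so
    # non-SCP entries sort to the front.  Illustrative only.
    def scp_number(href):
        parts = href.split("-")
        if len(parts) == 2 and parts[0] == "/scp" and parts[1].isdigit():
            return int(parts[1])
        return None                          # not a basic SCP link

    n1, n2 = scp_number(href1), scp_number(href2)
    if n1 is not None and n2 is not None:
        return (n1 > n2) - (n1 < n2)         # numeric comparison
    if n1 is None and n2 is None:
        return 0
    return 1 if n1 is not None else -1       # SCP links sort after the rest

links = ["/scp-10", "/scp-2", "/about", "/scp-100"]
ordered = sorted(links, key=functools.cmp_to_key(compare_scp_hrefs))
```

Note that comparing the split strings lexically would put "/scp-100" before "/scp-2"; extracting the number first is what gives the natural ordering.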

It is worth noting here that pattern matching is an exceptionally clear and expressive idiom but not necessarily a very performant one: the pattern matcher literally checks each of the implications of a given pattern, and there can be very many of them. The point is, if you are profiling for performance, the code generated by the pattern matcher is a good place to start looking. A less expressive form might give you considerably better performance in contexts where you know the value domain is smaller than all possible values.

Adding Descriptions

We'd like to append a truncated description to each SCP entry on the page. This requires fetching the SCP page and extracting the description.

HOWEVER, doing so every time you visit a tag page is not very good net citizenship, because it would fetch ALL the SCP pages on a tag page each time you visit the page. We like the SCP wiki, and we don't want to drown them in requests. So we will use Greasemonkey's built-in persistence system to store the descriptions, so that we only fetch a given SCP's page once, caching the result. This is much better for us, too, since fetching the pages takes a lot of time and ruins the responsiveness of our script.

Caching the values means that the first time we view a tag page, it will take a long time to find all the links our script hasn't seen before, but all subsequent visits will make no further requests. Eventually we'll cache the whole set of entries and we will be browsing normally.
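The caching discipline itself is simple enough to state in a few lines; here it is sketched in Python. The names and the 100-character cutoff mirror the script, but `fetch_page` is a stand-in for the real network request, and this is an illustration of the idea rather than the script's actual code:

```python
# Fetch-once caching, the same idea as the script's use of
# GM_setValue/GM_getValue: each URL is fetched at most once, and later
# lookups serve the stored, truncated description.
cache = {}

def get_description(url, fetch_page):
    if url not in cache:
        page = fetch_page(url)               # the expensive network hit
        cache[url] = (page or "...")[:100]   # store a truncated description
    return cache[url]

# A fake fetcher that records how often it is called:
calls = []
def fake_fetch(url):
    calls.append(url)
    return "Description: a toy page body " * 10

first = get_description("/scp-10", fake_fetch)
again = get_description("/scp-10", fake_fetch)   # served from the cache
```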

First we define a fallback system for storing values, which will be useful if we want to debug our script using Firebug:


(define local-cache {})

(if (undefined? [window "GM_getValue"])
    (set! window.GM_getValue
          (lambda (name :- (opt default* undefined))
            (var r [local-cache name])
            (if r r default*)))
  undefined)

(if (undefined? [window "GM_setValue"])
    (set! window.GM_setValue (lambda (name :- (opt value undefined))
                               (set! [local-cache name] value)
                               value))
  undefined)

Next we define a function to extract the description from the html of the page:


(define (get-description-from-html page-html)
  (comment "Given a text representation of the html for an scp
  entry page, retrieve the first 100 characters of the
  description.")
  (var desc-el          (.. (jQuery page-html) (find "strong:contains(Description)"))
       desc-body        (.. (jQuery desc-el) (parent))
       text             (.. (jQuery desc-body) (text)))
  (.. text (trim) (substr (.. "Description" length) 100)))

This is straightforward jQuery code. We load the page into a DOM and query around for the parts we want. The description is identifiable as the text immediately following a strong tag with "Description:" in it. This is the first element in the description section, so we find its parent, get a text representation of the contents and truncate it to 100 characters, chopping off the `Description` bit.

We just need to write a wrapper that fetches the contents of the page and passes it into the above function now.


(define (get-description-from-link url)
  (comment "Given the relative url of an scp page, retrieve the
  description of that SCP from the cache or from the page itself
  via a synchronous request.")
  (match (GM_getValue url)
         (undefined
          (var page undefined)
          (jQuery.ajax
           { url
             url
             async false
             success
             (lambda (data _ _)
               (set! page data))
             error
             (lambda ()
               (set! page undefined))})
          (match page
                 (undefined
                  (GM_setValue url "...")
                  "...")
                 (page-contents
                  (var short-version (get-description-from-html page-contents))
                  (GM_setValue url short-version)
                  (GM_getValue url))))
         (stored-description
          stored-description)))

This function takes our url and either fetches the truncated description from the cache or makes a synchronous request to the site to get the contents, which it uses to extract the description and then set the cache.

Note that in the above we use undefined to match against the undefined value and that we can use curly brace syntax to denote an object literal. This is a new feature of Gazelle.

And that is it!

Browse Responsibly

This script is capable of making a lot of requests to the SCP wiki, so please, please be responsible. I suggest that until your cache fills up you only load a tag page once an hour or less. I wouldn't be releasing this script to the public at all except that I believe Gazelle's user base (probably just me) is so small that it can't possibly make a major impact. Anyway, be nice to the internet!


          Optimizing Linux Mint   

Today I noticed that Linux Mint 18 boots more slowly than Linux Mint 17, which is not great for my laptop. I found out that this is caused by unnecessary services, applications, and visual effects. There are many optimizations that can speed up a Linux Mint system. In this article we will look at how to optimize Linux Mint. I will show only the safest tweaks, which I have tested myself… Read more →

The post Optimizing Linux Mint first appeared on Losst.


          Technical Account Manager (Engineer) - High Availability, Inc. - Audubon, PA   
Technical certifications from NetApp, VMware, Cisco and/or EMC. 5+ years enterprise experience with data center technologies such as Windows, Unix, Linux,...
From High Availability, Inc. - Thu, 11 May 2017 06:12:15 GMT - View all Audubon, PA jobs
          Network Engineer - Globus Medical - Audubon, PA   
Windows, Cisco Systems, UNIX, Linux, ESXi. The Network Engineer position oversees the installation, configuration and maintenance of networked information...
From Globus Medical - Mon, 27 Mar 2017 07:24:05 GMT - View all Audubon, PA jobs
          Wikileaks: The CIA develops malware for Linux-based operating systems   

On its website, Wikileaks is revealing through the Vault 7 program all the details of the hundreds of hacking tools the CIA had (and has) at its disposal for all kinds of devices. While yesterday we learned about Elsa, a piece of malware for geolocating users via WiFi, today Outlaw Country has been revealed: the CIA's malware for computers running Linux.
 

Outlaw Country: used to covertly manipulate network packets on Linux

While practically everything Wikileaks has revealed about the CIA concerns malware for Windows computers, it is especially notable that Outlaw Country is the first tool designed explicitly for Linux, which, as we all know, is much more secure than other operating systems and patches its vulnerabilities much faster.
 
Outlaw Country makes it possible to redirect all outgoing traffic from a target computer to computers controlled by the CIA, with the goal of stealing files from the infected machine or planting files on it.

The malware consists of a kernel module that creates hidden netfilter tables on the Linux target, through which network packets can be manipulated. Knowing the name of the table, an operator can create rules that take precedence over the existing iptables rules, and these rules are invisible both to a normal user and even to the system administrator.
 
Related: https://www.adslzone.net/2017/06/01/es-realmente-mas-seguro-linux-que-windows/
 
And Linux is heavily used on servers, which makes it a priority target for the CIA, which seeks to infiltrate third-party networks by any means available in order to carry out espionage, as we have seen with other tools such as Brutal Kangaroo or Pandemic.
 
 

The most recent document about the malware dates from 2015

The malware's installation and persistence mechanism is not described in much detail in the documents Wikileaks had access to. To make use of this malware, a CIA operator first has to use other exploits or backdoors to inject the kernel module into the target operating system.
 
 
Outlaw Country version 1.0 contains a module for the kernel of the 64-bit edition of CentOS/RHEL 6.x. The first release of that branch was published in 2011, the last in 2013, and it remained the most recent available until the summer of 2014, when version 7 arrived. The module only works with the default kernels, and in addition, version 1.0 of the malware only supports DNAT (Destination NAT) through the PREROUTING chain.
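For readers unfamiliar with DNAT: a Destination-NAT rule in the PREROUTING chain rewrites a packet's destination before the routing decision is made. A toy model of that bookkeeping in Python (the addresses are made up, and this has nothing to do with the malware's actual code):

```python
# Toy model of a DNAT rule sitting in a PREROUTING-like chain: packets
# whose destination matches a rule get their destination rewritten
# before "routing".  Purely illustrative; addresses are hypothetical.
dnat_rules = {("10.0.0.5", 80): ("203.0.113.9", 8080)}

def prerouting(packet):
    dst = (packet["dst_ip"], packet["dst_port"])
    if dst in dnat_rules:
        new_ip, new_port = dnat_rules[dst]
        packet = dict(packet, dst_ip=new_ip, dst_port=new_port)
    return packet

pkt = prerouting({"src_ip": "192.0.2.1", "dst_ip": "10.0.0.5", "dst_port": 80})
```

A hidden table of such rules, applied before the visible iptables rules, is what would let an operator silently divert a machine's traffic.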
 
The document Wikileaks has released is dated June 4, 2015. That same document lists as a requirement CentOS 6.x or earlier, with kernel version 2.6.32 (from 2011) or lower. It is not known whether the tool had a more up-to-date version for more recent releases.
 
https://www.adslzone.net/2017/06/29/outlaw-country-wikileaks-desvela-malware-de-la-cia-para-linux/
 
------------

          Linux for a Pentium M   
An acquaintance of mine currently needs a notebook as a stopgap, so I wanted to lend him my old one. Unfortunately it only has a Pentium M with 1 GB of RAM on it...
          Sr. Software Engineer - ARCOS LLC - Columbus, OH   
Oracle, PostgreSQL, C, C++, Java, J2EE, JBoss, HTML, JSP, JavaScript, Web services, SOAP, XML, ASP, JSP, PHP, MySQL, Linux, XSLT, AJAX, J2ME, J2SE, Apache,...
From ARCOS LLC - Tue, 13 Jun 2017 17:31:59 GMT - View all Columbus, OH jobs
          eCube Systems Announces NXTera 7.0 Modern Agile Middleware for OpenVMS and Linux   

Following in the footsteps of NXTware Remote and NXTmonitor, NXTera 7.0 completes the modernization story for OpenVMS with the most language-diverse, high-performance Agile middleware available.

(PRWeb May 17, 2017)

Read the full story at http://www.prweb.com/releases/2017/05/prweb14321022.htm


          eCube Systems Announces NXTmonitor 9.0 for DevOps and Application Performance Management   

NXTmonitor expands APM features for multi-platform application management and DevOps on Windows, OpenVMS, UNIX and Linux.

(PRWeb April 24, 2017)

Read the full story at http://www.prweb.com/releases/2017/04/prweb14252934.htm


          eCube Systems Announces NXTera 6.5 RPC Middleware Tools in NXTware Remote for OpenVMS and Linux   

NXTera 6.5 adds FORTRAN, BASIC, and C# to its COBOL, C/C++, Python, Perl, and Java support, making it the most language-diverse, high-performance RPC middleware available. NXTera 6.5 now gives OpenVMS and Linux users the same agile development and Web Services environment for their 3GL applications.

(PRWeb August 29, 2016)

Read the full story at http://www.prweb.com/releases/2016/08/prweb13624854.htm


          eCube Systems Extends NXTware Remote Development Platform to Linux   

NXTware Remote, a distributed development platform using Eclipse developed for OpenVMS, now works for the Linux platform.

(PRWeb May 19, 2016)

Read the full story at http://www.prweb.com/releases/2016/05/prweb13418603.htm


          eCube Systems Announces Complete 64 bit Support in NXTera 64   

New Version of NXTera 6.4 makes it possible for legacy Entera applications to access the new 64 bit databases and architectures on Windows, Unix and Linux.

(PRWeb January 23, 2014)

Read the full story at http://www.prweb.com/releases/2014/nxtera64/prweb11500749.htm


          eCube Systems Announces NXTera 6.3 Distributed Middleware Support for Red Hat Enterprise Linux 6   

New Version of NXTera 6.3 makes it possible for legacy Entera applications to run on Red Hat Enterprise Linux 6 with support for SOA integrated JAVA, C, C#, FORTRAN, and COBOL applications.

(PRWeb May 17, 2013)

Read the full story at http://www.prweb.com/releases/2013/5/prweb10691558.htm


          eCube Systems Announces NXTera 6.3 Support for Linux Redhat Enterprise 5   

New Version of NXTera 6.3 makes it possible for legacy Entera applications to run on Linux Redhat Enterprise 5 with support for SOA integrated JAVA, C/C++, FORTRAN, and COBOL applications.

(PRWeb April 17, 2013)

Read the full story at http://www.prweb.com/releases/2013/4/prweb10592935.htm


          Comment on Rider EAP 24 includes performance fixes, F# Interactive by John R   
Is or when will Linux debugging be available?
          Arduino-compatible robot dev kit includes RPi 3 and Tinker Board add-ons   

Husarion unveiled an Arduino-ready “Core2” robotics board for web based prototyping, plus a Linux-ready “Core2-ROS” that adds an RPi 3 or Tinker Board.

San Francisco based robotics firm Husarion, which has previously launched an industrial picker robot called the RoboCore, has gone to Crowd Supply to pitch a new Husarion Core2 prototyping platform for the robotics maker community. The $89 Cortex-M4 based Core2 controller board, which includes an ESP32 WiFi adapter, is also available in a Core2-ROS version that runs Linux and Robot Operating System (ROS). The ROS version replaces the ESP32 with a WiFi-ready Raspberry Pi 3 or Asus Tinker Board SBC.

Read more


           Six Things to Do to Secure Your Linux System   

Tuesday's Petya slam dunk by the bad guys, which may or may not have been a state sponsored swipe at Ukraine, was only one of several wake-up calls during the last couple of months for the folks taking care of IT security.

At least they should have been wake-up calls, but by the carnage left behind it looks as if a lot of folks have been operating their server rooms on autopilot. Not only were there patches at the ready to plug the vulnerabilities Petya used to do whatever it did (other than the fact that it probably wasn't ransomware, what it did hasn't been entirely sorted out yet), but I've heard credible first hand reports from several largish corporations that didn't have available backups.

Read more


          8 Best Linux Distros For Programming And Developers (2017 Edition)   

Linux-based operating systems are often used by developers to get their work done and create something new. Their major concerns while choosing a Linux distro for programming are compatibility, power, stability, and flexibility. Distros like Ubuntu and Debian have managed to establish themselves as the top picks. Some of the other great choices are openSUSE, Arch Linux, etc.

Read more


          RPi 3-like Le Potato SBC showcases fast Amlogic S905X SoC   

Libre Computers’ $25 to $35 “Le Potato” is an RPi 3 clone that runs Android 7.1 or Linux 4.13 on a quad-A53 S905X. There’s no WiFi, but you get HDMI 2.0.

A Shenzhen-based Libre Computer Project from Shenzhen Libre Technology Co. Ltd. has gone to Kickstarter to launch the first of a series of “Libre Computer Boards” called Le Potato. The project has so far received less than $4K toward its $25K all-or-nothing goal, with the campaign due to finish on July 24. However, if the project doesn’t fund, “we will utilize our other pre-prepared financing option and go directly to retail,” says the company.

Read more


          Ubuntu 17.10 Alpha and New Derivative   
  • Ubuntu 17.10 Alpha 1 Released

    Ubuntu 17.10 "Artful Aardvark" Alpha 1 is now available as the first official development release (sans the daily ISOs) for this upcoming milestone.

  • Ubuntu Linux 17.10 'Artful Aardvark' Alpha 1 now available for download

    There has been tons of Ubuntu news lately, with the death of Unity continuing to be felt in the Linux community. Just yesterday, a company that is one of Ubuntu's biggest proponents -- System76 -- announced it was creating its own operating system using that distribution as a base. While some might see that as bad news for Canonical's distro, I do not -- some of System76's contributions should find their way into Ubuntu upstream.

  • System76 Announces Its Own Linux Distribution Named Pop!_OS

    Linux machine vendor System76 has launched their own operating system named Pop!_OS. Based on Ubuntu GNOME, this new Linux distro’s Alpha version is right now available for download. The first final release of Pop!_OS will be shipped on October 19, 2017. System76 has described it as an operating system built for creators.


          Antergos: User-Friendly Desktop, Fueled by the Power of Arch   

Over the years, Arch Linux has had the misfortune of being maligned as one of the more challenging modern Linux distributions. That’s a shame, because Arch Linux is one of the most solid distributions you’ll find. Nonetheless, new users finding their way over to the official Arch Linux installation guide may choose to return to the likes of Ubuntu or Linux Mint. Now, however, there are other options, due to the release of some very user-friendly takes on the Arch Linux distribution, including Antergos.

Read more


          Devices With Linux: Tesla Cars, 'Internet of Things', Intel Has a New Media SDK for Linux   
  • Tesla starts pushing new Linux kernel update, hinting at upcoming UI improvements

    Albeit about six months late, Tesla finally started pushing the new Linux kernel update to the center console in its vehicles this week.

    While it’s only a backend upgrade, Tesla CEO Elon Musk associated it with several long-awaited improvements to the vehicle’s user interface. Now that the kernel upgrade is here, those improvements shouldn’t be too far behind.

    Sources told Electrek that the latest 8.1 update (17.24.30) upgraded the Linux kernel from version 2.6.36 to version 4.4.35.

  • Is Ubuntu set to be the OS for Internet of Things?

    The Internet of Things has enjoyed major growth in recent years, as more and more of the world around us gets smarter and more connected.

    But keeping all these new devices updated and online requires a reliable and robust software background, allowing for efficient and speedy monitoring and backup when needed.

    Software fragmentation has already become a significant issue across the mobile space, and may threaten to do so soon in the IoT.

    Luckily, Canonical believes it can solve this problem, with its IoT Ubuntu Core OS providing a major opportunity for manufacturers and developers across the world to begin fully monetising and realising the potential of the new connected ecosystem.

  • What's New in Intel Media SDK 2017 R2 for Embedded Linux

    Among the key features this release enables is the Region of Interest (ROI) for HEVC encoder in constant and variable bitrate modes.

    Developers can now control the compression rate of specific rectangular regions in the input stream while keeping the bitrate target. This makes it possible, for example, to reduce compression of areas where the viewer needs to see more detail (e.g. faces or number plates), or to increase it for background with complicated texture that would otherwise consume the majority of the bandwidth. ROI can also be used to put a privacy mask on certain regions that have to be blurred (e.g. logos or faces).


          Linux Graphics: Mali, Mesa, NVIDIA, and Mir   

          The beefy Dell Precision 7520 DE can out-muscle a growing Linux laptop field   

Project Sputnik has done an admirable job over the years of bringing a "just works" Linux experience to Dell Ultrabooks like the XPS 13 Developer Edition—in fact, we've tested and largely enjoyed those experiences multiple times now. But while the XPS 13 is a great machine that I would not hesitate to recommend for most Linux users, it does have its shortcomings. The biggest problem in my view has long been the limited amount of RAM; the XPS 13 tops out at 16GB. While that's enough for most users, there are those (software developers compiling large projects, video editors, even photographers) who would easily benefit from more.

Read more


          New Kernels and Linux Foundation Efforts   
  • Four new stable kernels
  • Linux Foundation Launches Open Security Controller Project

    The Linux Foundation launched a new open source project focused on security for orchestration of multi-cloud environments.

    The Open Security Controller Project software will automate the deployment of virtualized network security functions — such as firewalls, intrusion prevention systems, and application data controllers — to protect east-west traffic inside the data center.

  • Open Security Controller: Security service orchestration for multi-cloud environments

    The Linux Foundation launched the Open Security Controller project, an open source project focused on centralizing security services orchestration for multi-cloud environments.

  • The Linux Foundation explains the importance of open source in autonomous, connected cars

    Open source computing has always been a major boon to the world of developers, and technology as a whole. Take Google's pioneering Android OS for example, based on the open source code, which can be safely credited with impacting the world of everyday technology in an unprecedented manner when it was introduced. It is, hence, no surprise when a large part of the automobile industry is looking at open source platforms to build on advanced automobile dynamics.


          GNU/Linux Boards: Orange Pi, Le Potato, and Liteboard   
  • Orange Pi Plus 2e OS Installation

    Similar to the Raspberry Pi is the Orange Pi series of single board systems.

    These single boards are not compatible with the Operating System (OS) images for Raspberry Pi. In this article we will cover installing and setting up an OS.

  • New Libre-Focused ARM Board Aims To Compete With Raspberry Pi 3, Offers 4K

    There's another ARM SBC (single board computer) trying to get crowdfunded that could compete with the Raspberry Pi 3 while being a quad-core 64-bit ARM board with 4K UHD display support, up to 2GB RAM, and should be working soon on the mainline Linux kernel.

    The "Libre Computer Board" by the Libre Computer Project is the new Kickstarter initiative, which in turn is the work of Shenzhen Libre Technology Co. Through Kickstarter the project is hoping to raise $50k USD. The board is codenamed "Le Potato."

    Le Potato is powered by a quad-core ARM Cortex-A53 CPU while its graphics are backed by ARM Mali-450. Connectivity on the board includes HDMI 2.0, 4 x USB 2.0, 100Mb, eMMC, and microSD. Sadly, no Gigabit Ethernet or USB 3.0. Unlike the Raspberry Pi 3, it also goes without onboard WiFi/Bluetooth.

  • Open spec, sandwich-style SBC runs Linux on i.MX6UL based COM

    Grinn and RS Components unveiled a Linux-ready “Liteboard” SBC that uses an i.MX6 UL LiteSOM COM, with connectors compatible with Grinn Chiliboard add-ons.

    UK-based distributor RS Components is offering a new sandwich-style SBC from Polish embedded firm Grinn. The 60-pound ($78) Liteboard, which is available with schematics but no community support site, is designed to work with the separately available, SODIMM-style LiteSOM computer-on-module. The LiteSOM sells for 25 pounds ($32), or 30 pounds ($39) with 2GB eMMC flash. It would appear that the 60-pound Liteboard price includes the LiteSOM, but if so, it's unclear which version. There are detailed specs on the module, but no schematics.
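The Orange Pi item above stops short of the actual installation steps; for any of these boards, a sensible first step is verifying the downloaded OS image's checksum before flashing it. A minimal POSIX-shell sketch (the file name and hash below are stand-in demo values, not a real Orange Pi image):

```shell
#!/bin/sh
# Compare a file's SHA-256 digest against the published checksum.
# Returns success (0) only when they match.
verify_image() {
    expected=$1; file=$2
    actual=$(sha256sum "$file" | awk '{print $1}')
    [ "$expected" = "$actual" ]
}

# Demo with a stand-in file; with a real image you would then write it
# to the SD card, e.g. dd if=image.img of=/dev/sdX bs=4M (confirm the
# device node with lsblk first, since dd overwrites without asking).
printf 'test' > /tmp/opi_demo.img
if verify_image "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08" /tmp/opi_demo.img; then
    echo "image OK, safe to flash"
fi
```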


           Linux Rolls Out to Most Toyota and Lexus Vehicles in North America   

In his keynote presentation at Automotive Linux Summit, AGL director Dan Cauchy proudly announced that the 2018 Toyota Camry will offer an in-vehicle infotainment system based on AGL’s UCB.
The Linux Foundation

At the recent Automotive Linux Summit, held May 31 to June 2 in Tokyo, The Linux Foundation's Automotive Grade Linux (AGL) project made one of the biggest announcements in its short history: the first automobile with AGL's open source, Linux-based Unified Code Base (UCB) infotainment stack will hit the streets in a few months.

In his ALS keynote presentation, AGL director Dan Cauchy showed obvious pride as he announced that the 2018 Toyota Camry will offer an in-vehicle infotainment (IVI) system based on AGL’s UCB when it debuts to U.S. customers in late summer. Following the debut, AGL will also roll out to most Toyota and Lexus vehicles in North America.

Read more


          Review: IPFire as a home router   

IPFire is a Linux distribution that is focused on delivering a starting point for a router and firewall solution with a web interface. It can be made to do a whole lot, but it may not be the best fit for the needs of a home network.

I'll not go into performance testing at all in this review, as this will vary based on your hardware. You can use any x86_64 (or armv5tel) system with at least two Ethernet ports. You may need a third Ethernet port if you want to use an external wireless access point rather than configuring the box running IPFire as your access point.

Read more


          Nomad desktop – You’ll never walk alone   

If you've been using Linux for a while, you have probably heard of or even played with various desktop environments: Unity, Gnome, Plasma, Xfce, Cinnamon, and others. A personal quest of finding the most suitable interface between YOU and the system. But I bet you half a shilling you have not yet had a chance to experiment with Nomad.

Nomad Desktop is the face of a new Linux distribution named Nitrux. The naming choice is a little tricky, because Nomad is already heavily used to brand a range of software products, and the domain name for Nitrux has the magical NX combo in there. But it does look very interesting. The JS-heavy homepage offers a lot of visual candy, the screenshots are shiny, and Nitrux aims to carve its own niche in a small world saturated with desktop environments. The backbone behind this effort relies on Ubuntu and Plasma and cutting-edge Qt5 solutions, similar to KDE neon. Let’s see what it does.

Read more


          Wikileaks Reveals CIA Malware that Hacks & Spies On Linux Computers   
WikiLeaks has just published a new batch of the ongoing Vault 7 leak, this time detailing an alleged CIA project that allowed the agency to hack and remotely spy on computers running the Linux operating system. Dubbed OutlawCountry, the project allows CIA hackers to redirect all outbound network traffic on the targeted computer to CIA-controlled computer systems for exfiltration and

          Full Circle Magazine #122   
This month: * Command & Conquer * How-To : Python (Arduino), Intro To FreeCAD, and Installing UBports * Graphics : Inkscape & Kdenlive * [NEW!] Researching With Linux * Linux Labs: * Review: Etcher * Ubuntu Games: Siltbreaker Act 1 plus: News, Q&A, My Desktop, and soooo much more. Get it while it’s hot! http://fullcirclemagazine.org/issue-122
          System76 Announces Its Own Linux Distribution Named Pop!_OS   
Linux machine vendor System76 has launched their own operating system named Pop!_OS. Based on Ubuntu GNOME, this new Linux distro’s Alpha version is right now available for download. The first final release of Pop!_OS will be shipped on October 19, 2017. System76 has described it as an operating system built for creators. System76 is known
          NAS4Free 11.1.0.4.4421   
NAS4Free is a free NAS distribution based on FreeBSD. Its features include: complete management through a web interface; software RAID (0, 1, 5) and optional file encryption; the file systems ZFS v5000 (feature flags), UFS, Ext2/3, FAT, and NTFS; MBR and GPT partitions; and numerous network protocols such as CIFS/SMB (Samba v4.x), FTP, NFS, TFTP, AFP, RSYNC, Unison, SCP (SSH), iSCSI (initiator and target), HAST (Highly...
          Twain from PHP   

Twain from PHP

Reply to: Twain from PHP

Hi Rafael, go to the link http://sourceforge.net/projects/phpsane/ and analyze the code. It is simple: what it does is use the Linux scanimage command; you assemble the parameters and send them from PHP via the command line. I hope this helps. Regards.

Posted on May 4, 2013 by Ronny

          LINUX SYSTEMS ADMINISTRATOR - DIA - Las Rozas de Madrid, Madrid   
Spain, Portugal, Brazil, Argentina, and China: we operate an extensive network of around 7,000 stores, whether company-owned or franchised....
From DIA - Fri, 19 May 2017 15:21:53 GMT - See all: jobs in Las Rozas de Madrid, Madrid
          LinuxCon Europe 2011 – live stream   

For those who cannot attend this year's LinuxCon Europe 2011 conference, currently taking place in Prague at

The post LinuxCon Europe 2011 – live stream appeared first on OSWorld.pl.


          LinuxCon: the present and future of Linux   

On 17-19 August 2011 in Vancouver, Canada, the LinuxCon 2011 conference is taking place. It is a good occasion to reflect on

The post LinuxCon: the present and future of Linux appeared first on OSWorld.pl.


          LinuxCon 2009 already in September   

The LinuxCon 2009 conference takes place on 21 September in Portland (Oregon, USA). The lectures will be held in beautiful rooms at the hotel Portland

The post LinuxCon 2009 already in September appeared first on OSWorld.pl.


          LibreOffice 5.3.4.2   
LibreOffice is the power-packed free, libre and open source personal productivity suite for Windows, Macintosh and GNU/Linux, that gives you six feature-rich applications for all your document production and data processing …
          Qualcomm Snapdragon 450 for mid-range smartphones/tablets   
Qualcomm's announcements from MWC 2017 Shanghai continue, with the company unveiling the new Qualcomm Snapdragon 450 SoC for mid-range smartphones/tablets, as well as the Snapdragon Wear 1200 for wearables. 
The Qualcomm Snapdragon 450 is the successor to the Snapdragon 435 and is built on a 14nm FinFET process. The new SoC brings a 25% improvement in CPU and GPU speed, at least 4 hours more battery life with the same battery and the same charging time, and 30% greater power savings during mobile gaming. In addition, the Snapdragon 450 is the first of the 400 series to support a dual-camera system (up to 13MP + 13MP) with a real-time bokeh effect, or a single camera up to 21MP. 
It also supports the Quick Charge 3.0 fast-charging technology and USB Type-C 3.0 ports, and features the X9 LTE modem for download speeds of up to 300Mbps. As for the Qualcomm Wear 1200, it is built at 28nm and supports the new LTE IoT standard, which may not offer 4G LTE speeds but allows communication at up to 300kbps while consuming very little power. In this way manufacturers will be able to build wearables and IoT devices with long battery life. They note, for example, that a 350mAh battery can last almost 10 days on standby. It also supports GPS/GLONASS, VoLTE, and Linux-based operating systems.
          Serious remote vulnerability in Linux via systemd   
A serious vulnerability has been announced that affects several Linux distributions and can allow a remote attacker to cause denial-of-service conditions or execute arbitrary code.

Researcher Chris Coulson of Canonical (the developers of Ubuntu) has announced an out-of-bounds write vulnerability in systemd-resolved. A malicious DNS server can exploit it by sending a specially crafted TCP response to trick systemd-resolved: certain sizes passed to dns_packet_new can cause it to allocate a buffer that is too small, after which arbitrary data is written beyond its end. The attacker could crash the system or achieve arbitrary code execution.


Ubuntu confirms that the vulnerability, CVE-2017-9445, was introduced in systemd in version 223 in 2015.
Ubuntu has published system updates that correct the problem:
Ubuntu 17.04:
Ubuntu 16.10:
Or simply by updating with:
       $ sudo apt-get update
       $ sudo apt-get dist-upgrade
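As a quick local check, the affected range can be read off the first line of `systemd --version`. A minimal sketch, assuming the usual "systemd NNN" banner format and treating everything from version 223 upward as needing the vendor advisory check (whether a given build is actually patched depends on distro backports):

```shell
#!/bin/sh
# Parse the version number out of a "systemd NNN" banner line and flag
# builds in the range described by the advisory (>= 223). This is only
# a first-pass check, not proof of vulnerability.
version_from_banner() {
    printf '%s\n' "$1" | awk 'NR==1 {print $2}'
}

in_affected_range() {
    [ "$1" -ge 223 ] 2>/dev/null
}

v=$(version_from_banner "$(systemd --version 2>/dev/null || echo 'systemd 0')")
if in_affected_range "$v"; then
    echo "systemd $v: within the advisory's range, check your distro's update"
else
    echo "systemd $v: below the advisory's range"
fi
```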

Systemd was created by Red Hat developers and has also been included in other Linux distributions such as Debian, openSUSE, Ubuntu, Arch Linux, SUSE Linux Enterprise Server, Gentoo Linux, Fedora, and CentOS.

Debian, for its part, confirms that this vulnerability affects Debian Stretch and Buster (though not Debian Wheezy and Jessie). Red Hat and SUSE confirm that they are not affected.

More information:

CVE-2017-9445: Out-of-bounds write in systemd-resolved with crafted TCP payload

USN-3341-1: Systemd vulnerability
Ubuntu Security Notice USN-3341-1

Canonical CVE-2017-9445

Debian CVE-2017-9445

Red Hat CVE-2017-9445

SUSE CVE-2017-9445


Antonio Ropero
Twitter: @aropero



          What's New in Intel® Media SDK 2017 R2 for Embedded Linux*   

Smart encoding improvements for embedded vision solutions

Intel® Media SDK provides a cross-platform API for developing media applications on Windows* and Embedded Linux*. And the new release,... Read more >

The post What's New in Intel® Media SDK 2017 R2 for Embedded Linux* appeared first on Blogs@Intel.


          Installing 3CX on Linux Version: Debian 9 Stretch   

As you may have already heard, Debian released a new Linux version of their popular operating system “Stretch”. Those who tried to install 3CX on Linux version Stretch, quickly realised that the installation failed with a dependency error. Well, the reason is simple – 3CX repository was created to be compatible ... [+]

The post Installing 3CX on Linux Version: Debian 9 Stretch appeared first on 3CX.


          Automatically create/remove and mount a swap file in Linux with a shell script   

A few days ago we wrote about three ways to create a swap file in Linux. Those are the common methods, but they require manual operation.


Today I found a small shell script by Gary Stafford (two shell scripts, actually: one to create a swap file and another to remove it) that helps us create/remove and automatically mount a swap file in Linux.



By default the script creates and mounts a 512MB swap file (the copy below sets swapsize=1024). If you want more swap space or a different file name, modify the script accordingly; that is not difficult, because it is a small, approachable script.



How to check the current swap size


Check existing swap space with the free and swapon commands.


$ free -h

total used free shared buff/cache available

Mem: 2.0G 1.3G 139M 45M 483M 426M

Swap: 2.0G 655M 1.4G


$ swapon --show

NAME TYPE SIZE USED PRIO

/dev/sda5 partition 2G 655.2M -1



The output above shows that my current swap space is 2GB.


Creating the swap file


Create a file named create_swap.sh and add the following content to automate swap-space creation and mounting.


$ nano create_swap.sh

#!/bin/sh


# size of swapfile in megabytes

swapsize=1024


# does the swap file already exist?

grep -q "swapfile" /etc/fstab


# if not then create it

if [ $? -ne 0 ]; then

echo 'swapfile not found. Adding swapfile.'

fallocate -l ${swapsize}M /swapfile

chmod 600 /swapfile

mkswap /swapfile

swapon /swapfile

echo '/swapfile none swap defaults 0 0' >> /etc/fstab

else

echo 'swapfile found. No changes made.'

fi


echo '--------------------------------------------'

echo 'Check whether the swap space created or not?'

echo '--------------------------------------------'

swapon --show


Make the file executable.


$ sudo chmod +x create_swap.sh


Run the script to create and mount the swap file.


$ sudo ./create_swap.sh


swapfile not found. Adding swapfile.

Setting up swapspace version 1, size = 1024 MiB (1073737728 bytes)

no label, UUID=d9004261-396a-4321-a45f-9923e3e1328c

--------------------------------------------

Check whether the swap space created or not?

--------------------------------------------

NAME TYPE SIZE USED PRIO

/dev/sda5 partition 2G 954.1M -1

/swapfile file 1024M 0B -2

You can see the new 1024M swapfile. Reboot the system to start using the new swap file.


Removing the swap file


If you no longer need the swap file, create a file named remove_swap.sh with the following content to remove the swap file and its /etc/fstab mount entry.

$ nano remove_swap.sh

#!/bin/sh


# does the swap file exist?

grep -q "swapfile" /etc/fstab


# if it does then remove it

if [ $? -eq 0 ]; then

echo 'swapfile found. Removing swapfile.'

sed -i '/swapfile/d' /etc/fstab

echo "3" > /proc/sys/vm/drop_caches

swapoff -a

rm -f /swapfile

else

echo 'No swapfile found. No changes made.'

fi


echo '--------------------------------------------'

echo 'Check whether the swap space removed or not?'

echo '--------------------------------------------'

swapon --show


And make this file executable as well.


$ sudo chmod +x remove_swap.sh


Run the script to remove and unmount the swap file.


$ sudo ./remove_swap.sh


swapfile found. Removing swapfile.

swapoff: /dev/sda5: swapoff failed: Cannot allocate memory

--------------------------------------------

Check whether the swap space removed or not?

--------------------------------------------

NAME TYPE SIZE USED PRIO

/dev/sda5 partition 2G 951.8M -1




          Senior Java Developer   
PR-Dorado, Senior Java Software Developer We are a fast growing B2B ecommerce company with an innovative technology platform. We're adding to our core team and are looking for a dedicated, experienced professional Java developer. Skills required: extensive experience with Linux and Java; familiarity and/or proficiency in other languages including PHP, C/C++/C#, Ruby, Python, Perl; ability to code and test C# on Wi
          Linux has a sense of humour   
none
          Comment on Yesterday’s Copy Protection Schemes – SimCity by Melodee   
Skype has opened its online-dependent customer beta to the world, after launching it broadly within the United states and You.K. before this 30 days. Skype for Web also now can handle Chromebook and Linux for immediate text messaging communication (no video and voice yet, individuals need a connect-in installment). The expansion in the beta adds help for an extended listing of languages to help you strengthen that global usability
          What Excites Me The Most About The Linux 4.12 Kernel   
If all goes according to plan, the Linux 4.12 kernel will be officially released before the weekend is through. Here's a recap of some of the most exciting changes for this imminent kernel update...
          AMD Silently Updates AMDGPU-PRO 17.10 Linux Driver   
AMD has silently pushed out an updated AMDGPU-PRO 17.10 driver but the changes are unknown...
          The First Radeon Vega Frontier Linux Benchmark Doesn't Tell Much   
We have some OpenGL numbers for Radeon Vega Frontier Edition on AMDGPU-PRO under Linux...
          Ubuntu vs. Fedora vs. openSUSE vs. Manjaro vs. Clear Linux On Intel's Core i9 7900X   
Continuing on with our Core i9 7900X Linux benchmarking this week are some numbers when testing this ten-core Skylake-X processor on various Linux distributions under an array of different workloads. Tested for this roundup was Ubuntu 16.04.2 LTS, Ubuntu 17.04, Fedora 26, openSUSE Tumbleweed, Manjaro 17.0.2, and Clear Linux 16160.
          MSI DS502 USB Gaming Headset Works On Linux   
When buying the MSI X299 SLI PLUS for our initial X299 + Intel Core X Series Linux benchmarking from NewEgg it came with the MSI DS502 Gaming Headset as a free gift. Curiosity got the best of me today, and it actually works just fine under Linux...
          NVIDIA 384.47 On Linux Brings Some Vulkan Speed Boosts   
Today NVIDIA released their first 384 series Linux driver beta and for the occasion I fired up some fresh OpenCL / Vulkan / OpenGL benchmarks in seeing if there are any performance changes for users to see with this new series that will eventually succeed the 381.22 stable release...
          Micro Machines World Series Debuts With Linux Support   
Micro Machines World Series rolled out today and it's greeted by same-day Linux support...
          AMDGPU-PRO 17.20 Benchmarking vs. RadeonSI/RADV   
Thanks to this week's Radeon Vega Frontier Edition launch, AMD pushed out a new build of their hybrid driver stack for Linux, AMDGPU-PRO. This new release is marketed as AMDGPU-PRO 17.20 and is only found when looking for the Frontier driver, but it's been working out fine so far in my Polaris/Fiji GPU testing. Here are some benchmarks compared to their current stable series, AMDGPU-PRO 17.10, as well as the newest open-source AMDGPU+RadeonSI/RADV driver stack.
          NVIDIA Rolls Out The 384 Linux Driver Series Into Beta   
NVIDIA has today announced the 384.47 beta driver for Linux, which succeeds their current 381 short-lived stable release series...
          Comment on Furniture Makeover: DIY Chalk Painted Wood File Cabinet by Lucie   
Skype has launched its online-based consumer beta for the entire world, after introducing it generally inside the U.S. and You.K. previously this four weeks. Skype for Web also now works with Chromebook and Linux for instant messaging communication (no voice and video but, those need a plug-in installment). The increase of your beta adds help for an extended selection of languages to help you bolster that international user friendliness
          Mozilla Firefox Version 54.0.1 Released   

Mozilla sent Firefox version 54.0.1 to the release channel today. Firefox ESR was updated to version 52.2.1.

The update includes a number of bug fixes.

The next scheduled release is August 8, 2017 (5 week cycle with release for critical fixes as needed).

Fixes:

To get the update now, select "Help" from the Firefox menu, then pick "About Firefox."  Mac users need to select "About Firefox" from the Firefox menu. If you do not use the English language version, Fully Localized Versions are available for download.

References




Remember - "A day without laughter is a day wasted."
May the wind sing to you and the sun rise in your heart...

          Mozilla Firefox   
Mozilla Firefox has been released and is now available for Windows, Mac, and Linux in more than 70 languages. This release represents the hard work, dedication, and perseverance of thousands of contributors throughout the Mozilla community and around the world. Firefox has a huge number of...
[Web Browser]
          Comment on Open a Men’s Hair Salon Franchise with Men’s Hair House by Www.Denverlinux.Com   
It works very well for me
          The semantics of exit statuses   

I remembered that I once asked about the semantics of *nix exit statuses and got answers, so I am writing it down here.

moznion   [5:46 PM] When a process exits after receiving a signal, is there some convention of using the signal number as the exit code?
takesako  [5:50 PM] I don't think there is.
moznion   [5:50 PM] So nothing in particular, then. Thanks.
songmu    [5:51 PM] When writing wrapper scripts and the like, depending on the case, you sometimes take care to preserve it.
          [5:51 PM] horenso preserves it, as I recall.
moznion   [5:52 PM] So it's more a matter of good manners?
songmu    [5:53 PM] It comes down to whether the caller wants to tell which signal killed the process, I suppose.
xaicron   [5:53 PM] I'm in the camp that says you should emit whatever exit codes the app itself defines.
          [5:53 PM] For a command-line tool there is rarely a use case beyond success or failure. Put the rest in the logs.
moznion   [5:55 PM] Fair enough.
          [5:55 PM] I take the idea to be: users might branch on the exit code, so a wrapper should preserve it.
songmu    [5:55 PM] That's it.
moznion   [5:57 PM] And because you dutifully wrote logs, a user might grep them with a regex and branch on that, but that's not your problem.
hirose31  [6:09 PM] Exit codes do have meaning, you know.
tokuhirom [6:12 PM] I'm wondering which part is supposed to be the pun.
hirose31  [6:12 PM] lol
          [6:12 PM] See http://www.unix.com/man-page/all/3/sysexits/ and /usr/include/sysexits.h.
          [6:13 PM] For example, when a filter program invoked from aliases exits 75 (EX_TEMPFAIL), sendmail retries it.
cho45     [6:15 PM] What an exquisitely arbitrary number.
hirose31  [6:16 PM] "Hey Isono, let's EX_UNAVAILABLE!"
kazuho    [6:31 PM] If you exec, the exit code mismatch problem goes away.
yappo     [6:52 PM] That's terrible lol
moznion   [7:31 PM] I had been using exit codes rather unthinkingly, so I wondered whether there was some gentlemen's agreement.
mattn     [7:32 PM] http://linuxjm.osdn.jp/html/LDP_man-pages/man3/exit.3.html
          [7:33 PM] https://ja.wikipedia.org/wiki/%E7%B5%82%E4%BA%86%E3%82%B9%E3%83%86%E3%83%BC%E3%82%BF%E3%82%B9
          [7:33 PM] This one, rather.
          [7:33 PM] I usually use just 0/1, but change it when printing usage.
moznion   [7:34 PM] So fsck's exit status changes as a logical OR of conditions...
          [7:34 PM] Right. I've been using them in that sort of way too.
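The two conventions discussed above, the shell reporting a signal death as 128+N and wrappers preserving sysexits.h-style codes such as EX_TEMPFAIL (75), can be sketched as:

```shell
#!/bin/sh
# A process killed by signal N has no exit code of its own; the calling
# shell reports its status as 128+N (SIGTERM is 15, so 143 here).
sh -c 'kill -TERM $$' || echo "killed by SIGTERM, reported status: $?"

# A wrapper that wants callers to see the child's sysexits-style code
# simply propagates $? (EX_TEMPFAIL is 75 in sysexits.h).
run_wrapped() {
    sh -c 'exit 75'   # stand-in for the real child command
    status=$?
    echo "child exited with: $status"
    return $status
}
run_wrapped || echo "wrapper preserved status: $?"
```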

          Universal memory card reader, 1 Ft, no minimum price!!!!!!!! - Current price: 1 Ft   
Universal USB 2.0 memory card reader
480 Mb/s speed
Compatible with USB 1.1 and 2.0
4 card slots
Supported card types: Micro MS/ M2/ SD/ MMC/ SDHC/DV/MS DUO/ MS PRO DUO/ Micro SD/T-Flash
Supported operating systems: Windows 7/VISTA/XP/2000/ME/98SE/98, Mac OS X 9.0, and Linux 2.4 or later
Slim design, compact size
Plug & Play: works as soon as it is inserted
Supported card size: up to 32 GB
Material: plastic
Size: 66 x 21 x 16 mm
Weight: 14 g
Color: random
The products ship from abroad, so delivery takes 15-30 working days; please bid with this in mind!
Universal memory card reader, 1 Ft, no minimum price!!!!!!!!
Current price: 1 Ft
Auction ends: 2017-07-01 23:03
          Linux Administrator/ Systems Administrator   
NM-Albuquerque, Job description: Key responsibilities include performing general server administration tasks, monitoring and optimizing system performance and reliability, operational workflow development, managing enhancements/upgrades, and providing various levels of support. Develops and maintains system documentation for lab/data center configuration and customizations. Leads technical efforts for software
          Mozilla Firefox 54.0.1   
Firefox 54 is now available for download for Windows, Linux, and OS X.
          HP M400 401DN Monochrome Laser Printer, 35 ppm, Duplex, Network, USB, 1200 x 1200, A4, new 2.7K cartridge   
Overview
Format: A4
Functions
Printer: YES
Duplex printing: YES
Printer
Technology: Laser
Print mode: Monochrome
Black-and-white print speed (ppm): 35
Print resolution (dpi): 1200 x 1200
Standard RAM (MB): 256
Processor (MHz): 800
Interface: USB and RJ-45
Consumables
Consumable type: toner cartridge (new 2,700-page cartridge included)
Consumables: CF280A - 2,700 pages
CF280X - 6,900 pages
Paper capacity: 250
Other
Dimensions (W x D x H): 364.6 x 368 x 271
Operating systems: Windows
Linux
Mac OS
* Printers ship with a cartridge, test page, and power cable
** Data cables, power cables, and peripherals are a bonus while stocks last!
 

Price: 528 Lei Special Price: 517 Lei


          Developer - Java, AWS, Puppet, Chef, Rest, NoSQL, Subversion, Hudson, Linux   

          Hire a Linux Developer with knowledge about kernel level rootkits functionality by aalesh28   
Embedded C based project on a 32-bit Linux processor. Someone with knowledge of how ROP (return-oriented programming) is done and of shellcode. Not a major project; I have done most of the work and just need help with a little bit of troubleshooting... (Budget: $30 - $250 USD, Jobs: C Programming, Embedded Software, Linux, Software Development, Ubuntu)
          How can one self-learn MySQL from zero?   
I am a living example. I went to an ordinary third-tier university and majored in instrumentation; after graduating in 2012 my first job was circuit-board testing. A project brought me into contact with databases, which at the time I treated purely as a fancier Excel spreadsheet. I got interested, and without attending any training I taught myself MySQL and Linux from zero; I am now a MySQL & MongoDB DBA at a domestic public cloud provider. As for non- ...
          06/29 Linuxfx 8.0   
none
          How to Kick People off Your Wifi Network with Aireply-ng?   
In this tutorial, I assume that you are familiar with the Linux terminal and basic command-line usage. Step 1: Make sure your wireless card is in monitor mode. Type "iwconfig your_wireless_card" to check. You can turn off the wireless card, switch it to monitor mode, and turn it on again; I prefer to use 'airmon-ng' to create a virtual monitor-mode interface. […]
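Once a monitor-mode interface exists, the actual kick uses aireplay-ng's deauthentication attack. A hedged sketch: the MAC addresses and the wlan0mon interface below are placeholders, the aircrack-ng suite must be installed, and running the printed command requires root and your own network. Building the command line in a function lets the argument logic be checked without a wireless card present.

```shell
#!/bin/sh
# Assemble the aireplay-ng deauthentication command line.
build_deauth_cmd() {
    ap_bssid=$1; client_mac=$2; iface=$3
    # --deauth 0 keeps sending deauthentication frames until interrupted;
    # -a is the access point's BSSID, -c the client to kick off.
    printf 'aireplay-ng --deauth 0 -a %s -c %s %s' \
        "$ap_bssid" "$client_mac" "$iface"
}

# Example invocation with placeholder MACs; on real hardware you would
# run the printed command as root after `airmon-ng start wlan0`.
build_deauth_cmd "AA:BB:CC:DD:EE:FF" "11:22:33:44:55:66" "wlan0mon"
echo
```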
          Telegram Desktop 1.1.9 Win/Linux/Mac + Portable (Telegram messenger)   
Telegram Desktop, the Windows build, is one of the newest fast, simple, and secure messengers for sending messages, also available on Android and other mobile platforms; it is similar to the WhatsApp application but offers more features. Voice calls, creating channels and groups, sending videos, photos, and files up to 1 GB, a variety of stickers, and encryption of your data are among this software's features.
          System Administrator - Linux - State of Colorado Job Opportunities - Golden, CO   
Coordinates and interacts with other CCIT staff, particularly front-line user support staff (e.g. CCIT provides support for enterprise, departmental and...
From State of Colorado Job Opportunities - Mon, 26 Jun 2017 19:34:27 GMT - View all Golden, CO jobs
          Fujitsu and SUSE launch a support service for SUSE operators in mission-critical areas   

Fujitsu and SUSE are starting to offer "SUSE Business Critical Linux," a premium support service for customers using SUSE Linux Enterprise Server. The support period is extended to a maximum of eight years, with a 24-hour support structure.


          The top command: display running processes in real time   

This series introduces Linux commands, from their basic syntax to options and concrete usage examples. This installment covers the "top" command, which displays information about running processes in real time.


          Sr Engineer II, SW - Android Systems - Linux Kernel Internals - Device Driver   
HARMAN International - Bangalore, Karnataka - Position Summary: Work on the Android code base, customizing it for car infotainment systems based on the requirements. Work on core Android frameworks for feature completion, performance, and stability improvements. Job Responsibilities: Porting, enhancing and customizing exist...
          Project Leader - Embedded Device Driver Design and Development   
Mistral Solutions Pvt. Ltd. - Bangalore, Karnataka - Project Leader - Embedded Device Driver Design and Development, 8 to 12 yrs... Experience is a must. BSP & device driver development - Embedded Linux/Android; Android HAL & device driver development/porting. Preferable skill...
          Module Leads - Embedded Linux Device Driver Development (BSP)   
Mistral Solutions Pvt. Ltd. - Bangalore, Karnataka - for this job. Redirect to company website Module Leads - Embedded Linux Device Driver Development (BSP) Mistral Solutions Pvt. Ltd. 4 to 8 yrs... & power management Experience in debugging device drivers Relevant Domain Experience is a must BSP & Device driver Development - Embedded Linux...
          APT 1.5~alpha4 announced   
The fourth alpha of version 1.5 of APT, the package management system of Debian GNU/Linux and Debian-based GNU/Linux distributions (whose 1.5~alpha3 was announced just recently), has been released by Julian Andres Klode. The officially announced release does not yet appear to be available for download on the mirrors. Noting that the release brings various new features and bug fixes, Klode said that more information about the new version can be found at debian.org […]
          APT 1.5~alpha3 announced   
The third alpha of version 1.5 of APT, the package management system of Debian GNU/Linux and Debian-based GNU/Linux distributions, has been announced by Julian Andres Klode. The officially announced release does not yet appear to be available for download on the mirrors. Noting that the release brings various new features and bug fixes, Klode said that more information about the new version can be found on the debian.org page. Debian […]
          BSP/Driver   
Spectrosign Software Solutions Pvt.Ltd - Hyderabad, Telangana - Secunderabad, Telangana - for this job. Redirect to company website BSP/Driver Spectrosign Software Solutions Pvt.Ltd 4 to 8 yrs As per Industry Standards Hyderabad/ Secunderabad... to compile, run and tweak Linux kernel for MIPS/PowerPC platforms Awareness of the Linux kernel and device driver programming. Exposure...
          Sylpheed 3.6.0 announced   
The 3.6.0 stable release of Sylpheed, a simple, lightweight yet full-featured and easy-to-use email client with an elegant interface, has been announced. The new release introduces the ability to use multiple signatures per account and includes several UI improvements and bug fixes. Known for easy configuration and intuitive operation, Sylpheed is designed for keyboard-centric work and runs on Linux, BSD, Mac OS […]
          Require Immediate Job Opportunity for Device Driver Development Bangalore   
Aceline Tech Solutions Pvt. Ltd. - Bangalore, Karnataka - Consulting, Contract Staffing lateral hiring and Software Development Download PPT Photo 1 Job title Immediate Job Opportunity for Device Driver... Development @ Bangalore Job Description 4+ years of experience in Linux/Android OS driver development and board bring up. Good understanding...
          Slackware Live Edition 1.1.8.1 announced   
Version 1.1.8.1 of Slackware Live Edition, produced by Eric Hameleers, has been announced. Based on Slackware 14.2, the new release ships the KDE Plasma 5.10.2 desktop environment. Hameleers noted that the system is built on the 4.9.34 Linux kernel and comes with gcc 7.1.0 and glibc 2.25. Offering Qt 5.9.0 and Plasma 5.10.2 alongside the latest releases of digiKam, Calligra and Krita, the system's repositories […]
          Device Driver Developer -wireless   
Gurgaon, Haryana - Job Description Job Description: You will be primarily responsible for development of any S/W and driver for 802.11 wireless modules... & wireless communications systems. Mandatory knowledge of wireless driver architecture and APIs under Linux; under Windows is optional. Mandatory...
          APT 1.5~alpha2 announced   
The second alpha of version 1.5 of APT, the package management system of Debian GNU/Linux and Debian-based GNU/Linux distributions, has been announced by Julian Andres Klode. Noting that the release brings various new features and bug fixes, Klode said that more information about the new version can be found on the debian.org page. […]
          Mozilla Firefox 54.0.1 announced   
Mozilla Firefox, the fast, functional, open-source web browser, has been updated to version 54.0.1. Built with the aim of delivering a faster and more stable Firefox than ever, the new release contains important bug fixes as well as various security fixes and updates to some language files. The Netflix problem on GNU/Linux is reported to be resolved, as is an issue that occurred when opening multiple tabs […]
          Long-term support release 4.1.42 of the kernel announced   
Official Linux kernel site: https://www.kernel.org Long-term support release: 4.1.42 2017-06-29 Changelog…
          Linuxfx OS 8.0 LTS announced   
Version 8.0 LTS of Linuxfx, a Brazilian, Ubuntu-based distribution offering automatic hardware detection and configuration, support for popular multimedia codecs and an intuitive KDE desktop interface, has been announced by Rafael Wagner. Pointing out that the system, which ships with the KDE Plasma 5 desktop environment, is a long-term support (LTS) release, Wagner said that it works with Microsoft Office documents […]
          Stable release 4.11.8 of the kernel announced   
Official Linux kernel site: https://www.kernel.org Latest stable kernel: 4.11.8 2017-06-29 Changelog…
          Comment on Reliably compromising Ubuntu desktops by attacking the crash reporter by Gstreamer – A critical security flaw fixed on Ubuntu / Fedora Linux | UnderNews   
[…] was also discovered, by the Irish researcher Donncha O'Cearbhaill, who published the details of the attack on his blog. This vulnerability affects all Ubuntu versions above version […]
          Comment on Reliably compromising Ubuntu desktops by attacking the crash reporter by Serious Ubuntu Linux desktop bugs found and fixed | syslab.com on open source   
[…] Mint, you have a bug to patch. Donncha O’Cearbhaill, an Irish security researcher, found a remote execution bug in Ubuntu. This security hole, which first appeared in Ubuntu 12.10, makes it possible for malicious code to […]
          Software Development Engineer (C++, Linux) - Mentor Graphics - Fremont, CA   
Previous experience in EDA, hierarchy management, large scale system, and experience with Calibre SVRF for DRC is a strong plus....
From Mentor Graphics - Tue, 25 Apr 2017 23:19:16 GMT - View all Fremont, CA jobs
          Software Development Engineer (C,C++, Linux) - Mentor Graphics - Fremont, CA   
Experience with Calibre SVRF for DRC is a strong plus. Software Development Engineer (C,C++, Linux) - 5788....
From Mentor Graphics - Wed, 12 Apr 2017 22:34:46 GMT - View all Fremont, CA jobs
          Getting Started with Docker, Part Six: Swarm   

Original article: http://www.itbus.tech/detail.html?id=8738

Prerequisites

Introduction

In the previous part of this series, Services, we already learned how to define a service and scale its capacity.

In this part, you will see how to deploy an application to a cluster and run it on multiple machines. Swarm lets us deploy services across multiple containers and multiple machines.

Understanding swarm clusters

A swarm is a cluster of machines running Docker. You can still use the docker command to control the cluster, but you can only issue commands to the swarm manager. The machines in the cluster can be physical machines or virtual machines; after joining the swarm, each of them is a node.

The swarm manager can use different strategies to run containers, such as the "emptiest node" strategy, which picks the least-utilized node to run a container, or the "global" strategy, which ensures that every node runs at least one container of a given image. You specify the strategy in the Compose file.

The swarm manager is the node in the cluster on which you can execute commands and authorize other worker nodes to join. Worker nodes only contribute capacity; they cannot authorize other nodes to join the cluster.

Setting up your swarm

A swarm is made up of multiple machines; physical or virtual machines both work. The simplest way is to run docker swarm init to enable swarm mode and make the current machine the manager node, then run docker swarm join on the other machines to have them join the swarm.

Local virtual machines (Mac, Linux, Windows 7, Windows 8)

To keep things simple, let's use virtual machines to set up the swarm. You need to install VirtualBox.

Then create a couple of virtual machines with docker-machine:

$ docker-machine create --driver virtualbox myvm1
$ docker-machine create --driver virtualbox myvm2

This step downloads an image; if your network is slow, please be patient.

You now have two local virtual machines, myvm1 and myvm2 (you can list them with docker-machine ls). We will make myvm1 the manager node and myvm2 a worker node.

You can use docker-machine ssh to log in to a virtual machine or send it commands. Let's initialize myvm1 as the manager first:

$ docker-machine ssh myvm1 "docker swarm init"
Swarm initialized: current node <node ID> is now a manager.

To add a worker to this swarm, run the following command:

  docker swarm join \
  --token <token> \
  <ip>:<port>

If this fails with a message telling you to add --advertise-addr, use docker-machine ls to look up the virtual machines, copy myvm1's IP and specify port 2377, for example:

docker-machine ssh myvm1 "docker swarm init --advertise-addr 192.168.99.100:2377"

After initialization succeeds, a message is printed telling you how to join this swarm. Copy that command and run it on myvm2:

$ docker-machine ssh myvm2 "docker swarm join \
--token <token> \
<ip>:<port>"

This node joined a swarm as a worker.

Note: don't drop the backslashes.

You can log in to myvm1 with docker-machine ssh myvm1, and docker node ls shows all the nodes currently in the swarm:

docker@myvm1:~$ docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS
brtu9urxwfd5j0zrmkubhpkbd     myvm2               Ready               Active              
rihwohkh3ph38fhillhhb84sk *   myvm1               Ready               Active              Leader

Run exit to log out and return to your own machine. You can also send the command directly:

docker-machine ssh myvm1 "docker node ls"

Deploying the application

You may not have expected it, but the hardest part is already behind you. Now we just repeat the steps from the previous article. Remember, we can only run docker commands on myvm1, because it is the manager node.

Copy over the docker-compose.yml from before and upload it to myvm1 with docker-machine scp:

docker-machine scp docker-compose.yml myvm1:~
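For reference, the docker-compose.yml being copied could look roughly like the minimal sketch below; the service name, image tag, port mapping and replica count here are assumptions for illustration, not the exact file from the earlier article:

```shell
# Write a minimal compose file of the kind this series deploys.
# NOTE: "username/repo:tag" is a placeholder image name.
cat > docker-compose.yml <<'EOF'
version: "3"
services:
  web:
    image: username/repo:tag
    deploy:
      replicas: 5
    ports:
      - "80:80"
EOF
```

With 5 replicas, docker stack ps should list five tasks spread across the swarm nodes, which matches the five Running tasks shown later in this article.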

Time to deploy with swarm! It's the same docker stack deploy:

docker-machine ssh myvm1 "docker stack deploy -c docker-compose.yml getstartedlab"

Done!

Let's take a look at the containers:

$ docker-machine ssh myvm1 "docker stack ps getstartedlab"

ID            NAME        IMAGE              NODE   DESIRED STATE
jq2g3qp8nzwx  test_web.1  username/repo:tag  myvm1  Running
88wgshobzoxl  test_web.2  username/repo:tag  myvm2  Running
vbb1qbkb0o2z  test_web.3  username/repo:tag  myvm2  Running
ghii74p9budx  test_web.4  username/repo:tag  myvm1  Running
0prmarhavs87  test_web.5  username/repo:tag  myvm2  Running

Note the DESIRED STATE and STATE columns: the first time you deploy, the image has to be downloaded from the remote registry, so on a slow network it may take a while before the state becomes Running.

Accessing the cluster

You can access the cluster through the IP of either myvm1 or myvm2. The cluster's internal network is shared and load-balanced. Use docker-machine ls to find the IPs, open one in a browser and keep refreshing; you will see the IDs of all five containers, because requests are load-balanced.

Network diagram:

Scaling the application

As in the previous article, you only need to edit docker-compose.yml and re-run docker stack deploy; swarm adjusts everything for you automatically.

You can also create a few more virtual machines and join them to the swarm. Re-deploy with docker stack deploy and swarm will put the new nodes to use.

Cleaning up

If you want to take the application down:

docker-machine ssh myvm1 "docker stack rm getstartedlab"

To have a worker node leave the swarm: docker-machine ssh myvm2 "docker swarm leave"

To take down the manager node: docker-machine ssh myvm1 "docker swarm leave --force"

By u011499747, posted 2017/6/30 23:27:41. Original link

          Linux Systems Administrator - McKesson - Atlanta, GA   
In-depth knowledge of VMware ESX would be preferred as well as an operational understanding of Dell PowerEdge servers....
From McKesson - Thu, 01 Jun 2017 00:56:45 GMT - View all Atlanta, GA jobs
          Linux Systems Administrator - Between Technology - Barcelona   
At BETWEEN we select and back the best talent in the technology sector. We get involved in a wide variety of cutting-edge projects, working with the latest technologies. BETWEEN currently has a team of more than 350 people. In the Development area we cover web and mobile projects and work in fields such as BI, IoT, Big Data and R&D. In the Operations area we deliver Service Desk, IT infrastructure and cloud projects, among others. ...
          Junior Systems Technician - everis - Madrid   
everis is looking to hire a Junior Systems Technician. Tasks: administration of Linux systems based on Red Hat 5.6 and above; creation of bash/python scripts to automate routine operations; administration and operation of WebLogic 11g application servers; Oracle database operations: exports, imports, maintenance of user schemas, running SQL scripts; request management with the Redmine ticketing tool. Requirements: knowledge and...
          Systems Operator - AS/400 - everis - Madrid   
At everis we are looking for a Systems Operator - AS/400. Requirements: at least 1 year of experience with: Linux and AS/400 operating systems; systems monitoring tools; backup/restore managers; knowledge of task automation via scripting; incident/problem management tools (Remedy); scheduling tools (Control-M). Workplace: Madrid.
          Build / Deployment Engineer   
IA-Cedar Rapids, Client – DXC / Transamerica Location – Cedar Rapids IA Experience in deployment and configuration using Jenkins Experience in build and deployment automation Linux Shell scripting Experience in SVN / GIT / MAVEN Python Scripting
          Comment on Zoos and Aquariums by Bobbye   
Skype has launched its web-based consumer beta worldwide, after launching it broadly in the United States and U.K. earlier this month. Skype for Web now also supports Chromebook and Linux for instant messaging (no voice and video yet; those require a plug-in installation). The expansion of the beta adds support for a longer list of languages to help strengthen its worldwide usability.
          Junior Linux System Administrator - Computer People - Hook   
Junior Linux System Administrator Leading ISP - Odiham - £25k I am representing a Leading Internet Service Provider near Hook, who is looking for a Junior £20,000 - £25,000 a year
From myfuturerole.com - Thu, 29 Jun 2017 21:10:29 GMT - View all Hook jobs
          Senior Software Developer - 360pi - Ottawa, ON   
Nginx, Tornado, Flask, Ubuntu Linux. 360pi is an award-winning price intelligence engine that helps retailers and manufacturers monitor pricing to increase...
From 360pi - Fri, 23 Jun 2017 10:15:16 GMT - View all Ottawa, ON jobs
          Opera 47 beta (47.0.2631.7)   

Changes promoted to the beta channel

  • Export bookmarks to Netscape HTML format
  • The number of closed tabs kept in the tab menu increased from 10 to 32
  • Dark favicons are inverted in night mode
  • Clicking the currency converter button copies the converted value in the search popup
  • Lots of small fixes to the Reborn UI that give interface elements and icons more contrast

Still on the developer channel

  • Unit converter

Full changelog

Download: Windows, Mac, Linux



          Hire a Linux Developer with knowledge about kernel level rootkits functionality by aalesh28   
Embedded C based project on a 32 bit Linux processor. Someone with the knowledge of how the ROP (return oriented programming) is done and shellcodes. Not a major project, have done most of the work, just need help with a little bit of trouble shooting... (Budget: $30 - $250 USD, Jobs: C Programming, Embedded Software, Linux, Software Development, Ubuntu)
          5 tips for protecting your data on the internet   


Internet security experts cannot repeat their core message often enough: these days it is more important than ever to protect your data on the internet as well as possible against hackers and phishing attacks. Data theft, credit card fraud and the harvesting of sensitive banking details are becoming ever more common. Private photos and personal messages are also sometimes viewed and stolen by unauthorized persons. Time and again, criminal networks even obtain confidential company data or other valuable information that can be sold on the black market.

Yet although many precautions are effective and easy to implement, a large number of people still protect their online profiles and accounts inadequately. We have therefore put together five simple security tips. If you implement the following measures, you can substantially reduce your risk of falling victim to a phishing attack within a few minutes. We advise you to get started today and work through the list below, because prevention is better than cure.

Read on to learn how best to protect yourself against data theft and phishing attacks:

1) Passwords

The days when you could safely use passwords like "1234" or "password" are long gone; strictly speaking, such easy-to-crack passwords were never a good idea. Whole words are best avoided too, because some malicious programs run through the entire dictionary in seconds when guessing passwords. You should also never reuse one master password for all of your accounts. Sensitive data and financial transactions in particular must be secured with especially strong passwords, so we recommend using different passwords for online banking, Facebook, online casinos and online shops.

The most secure passwords combine lower-case and upper-case letters, digits and special characters. A good, strong password would be, for example: "EiwhZaPzä!1!". The easiest way to remember such a password is to build it from the first letters of a mnemonic sentence; in our case that sentence is "Es ist höchste Zeit alle Passwörter zu ändern!1!" ("It is high time to change all passwords!1!").
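The initial-letters trick can be sketched in a few lines of shell; the English sentence and the resulting password here are purely illustrative, not values you should actually use:

```shell
# Take the first letter of each word of a mnemonic sentence,
# then append digits/special characters.
sentence="It is high time to change all passwords"
initials="$(printf '%s\n' "$sentence" \
  | awk '{ for (i = 1; i <= NF; i++) printf substr($i, 1, 1) }')"
password="${initials}!1!"
echo "$password"   # Iihttcap!1!
```

In practice you would of course pick your own private sentence rather than a published example.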


2) Email security

Be especially careful with emails from unknown or suspicious senders. Chain letters, joke mails and obvious spam are best deleted immediately and unread. Never click on or download unknown email attachments: invisible malware often hides behind them, and chain letters with "funny PowerPoint presentations" can become a real threat to your data security. After deleting, don't forget to empty the trash as well.


3) Operating system

It is essential to keep your computer's operating system up to date, because many attackers try to exploit security holes in the system. As soon as software vendors learn of such exploits, they block the fraudsters' websites and release important security updates. Even if it is sometimes tedious, check your computer for operating system updates at regular intervals and install these official updates promptly.


4) A good virus scanner

A working antivirus program is an absolute must on Windows computers, and a virus scanner can be worthwhile on a Mac or Linux machine as well. Many internet users also forget that smartphones and tablets need such software too. Fortunately, there are now several free virus scanners and apps which, according to test reports, are almost as good as the paid products.


5) Unknown Wi-Fi networks

Out and about, your phone may sometimes offer to join a previously unknown Wi-Fi network with no password. Before you tap "Connect", make sure that the operator of this network is actually trustworthy: an open wireless connection can be a gateway for viruses and data-stealing malware. Connect only to trustworthy networks, and exercise caution!

____

-- This feed and its contents are the property of The Huffington Post, and use is subject to our terms. It may be used for personal consumption, but may not be distributed on a website.


          Install Phockup Linux Ubuntu Photo Organizer   

Install phockup, a Linux photo organizer for Ubuntu. Phockup is a media sorting and backup tool that organizes photos and videos from your camera into folders by year, month and day. It can also help you find duplicate photos on Linux Ubuntu. […]

Install Phockup Linux Ubuntu Photo Organizer originally posted on Source Digit - Latest Technology, Gadgets & Gizmos.


          Basics of Cheap Windows Hosting    
Putting your company on the web with a cheap Windows hosting provider is an important decision. The hosting company may not offer exactly the plan your business needs, so there are several things to consider while hunting for a reputable host that can serve your company well on the web.

Windows or Linux web hosting

The operating system the host states is the one your server will use. Using Windows on your own computer does not mean you need Windows hosting, nor do you need it if you are only building static front-end pages. Windows hosting adds capabilities such as ASP and .NET programming support, which can prove genuinely powerful. Listen carefully to the advice of whoever is building your pages.

Popular affordable Windows hosting services:

Look for a Windows hosting company that offers both the services and the quality you need. Some of the affordable features that top the charts are:

Reduced-fee or free domain registration: using your business name as the domain looks professional. Many hosts provide this for free and make sure that all updates and required renewals are completed on time.

Secure server sales on their server: security is vital when you make or accept payments over the internet, and you also send a great deal of personal information that should not be shared. The host should include SSL in its basic rate.

PHP and MySQL support: database support and PHP are very powerful tools, and you get them if you choose a UNIX server. They let you run programs, build forums, manage content and more, including e-commerce.

Design services at lower prices: professional web design is also offered at a good discount, and you should definitely take advantage. A professional website makes a good impression on customers and puts them at ease.

Webmail services and POP email boxes: POP mailboxes are essential, so use them to give all your staff and offices individual email addresses. A webmail service gives you the ability to access your email account from any location.

You can find a cheap Windows hosting provider that offers all the features you want within your means. It just takes a modest bit of comparison shopping to come across the best Windows hosting company for your professional website.

Basics of Cheap Windows Hosting
          Dries Buytaert: Acquia's first decade: the founding story   

This week marked Acquia's 10th anniversary. In 2007, Jay Batson and I set out to build a software company based on open source and Drupal that we would come to call Acquia. In honor of our tenth anniversary this week, I wanted to share some of the milestones and lessons that have helped shape Acquia into the company it is today. I hope that my record of Acquia's history not only pays homage to our incredible colleagues, customers and partners that have made this journey worthwhile, but that it offers honest insight into the challenges and rewards of building a company from the ground up.

A Red Hat for Drupal

In 2007, I was attending the University of Ghent working on my PhD dissertation. At the same time, Drupal was gaining momentum; I will never forget when MTV called me seeking support for their new Drupal site. I remember being amazed that a brand like MTV, an institution I had grown up with, had selected Drupal for their website. I was determined to make Drupal successful and helped MTV free of charge.

It became clear that for Drupal to grow, it needed a company focused on helping large organizations like MTV be successful with the software. A "Red Hat for Drupal", as it were. I also noticed that other open source projects, such as Linux, had benefited from well-capitalized backers like Red Hat and IBM. While I knew I wanted to start such a company, I had not yet figured out how. I wanted to complete my PhD first before pursuing business. Due to the limited time and resources afforded to a graduate student, Drupal remained a hobby.

Little did I know that at the same time, over 3,000 miles away, Jay Batson was skimming through a WWII Navajo Code Talker Dictionary. Jay was stationed as an Entrepreneur in Residence at North Bridge Venture Partners, a venture capital firm based in Boston. Passionate about open source, Jay realized there was an opportunity to build a company that provided customers with the services necessary to scale and succeed with open source software. We were fortunate that Michael Skok, a Venture Partner at North Bridge and Jay's sponsor, was working closely with Jay to evaluate hundreds of open source software projects. In the end, Jay narrowed his efforts to Drupal and Apache Solr.

If you're curious as to how the Navajo Code Talker Dictionary fits into all of this, it's how Jay stumbled upon the name Acquia. Roughly translating as "to spot or locate", Acquia was the closest concept in the dictionary that reinforced the ideals of information and content that are intrinsic to Drupal (it also didn't hurt that the letter A would rank first in alphabetical listings). Finally, the similarity to the word "Aqua" paid homage to the Drupal Drop; this would eventually provide direction for Acquia's logo.

Breakfast in Sunnyvale

In March of 2007, I flew from Belgium to California to attend Yahoo's Open Source CMS Summit, where I also helped host DrupalCon Sunnyvale. It was at DrupalCon Sunnyvale where Jay first introduced himself to me. He explained that he was interested in building a company that could provide enterprise organizations supplementary services and support for a number of open source projects, including Drupal and Apache Solr. Initially, I was hesitant to meet with Jay. I was focused on getting Drupal 5 released, and I wasn't ready to start a company until I finished my PhD. Eventually I agreed to breakfast.

Over a baguette and jelly, I discovered that there was overlap between Jay's ideas and my desire to start a "RedHat for Drupal". While I wasn't convinced that it made sense to bring Apache Solr into the equation, I liked that Jay believed in open source and that he recognized that open source projects were more likely to make a big impact when they were supported by companies that had strong commercial backing.

We spent the next few months talking about a vision for the business, eliminated Apache Solr from the plan, talked about how we could elevate the Drupal community, and how we would make money. In many ways, finding a business partner is like dating. You have to get to know each other, build trust, and see if there is a match; it's a process that doesn't happen overnight.

On June 25th, 2007, Jay filed the paperwork to incorporate Acquia and officially register the company name. We had no prospective customers, no employees, and no formal product to sell. In the summer of 2007, we received a convertible note from North Bridge. This initial seed investment gave us the capital to create a business plan, travel to pitch to other investors, and hire our first employees. Since meeting Jay in Sunnyvale, I had gotten to know Michael Skok who also became an influential mentor for me.

Wired interview
Jay and me on one of our early fundraising trips to San Francisco.

Throughout this period, I remained hesitant about committing to Acquia as I was devoted to completing my PhD. Eventually, Jay and Michael convinced me to get on board while finishing my PhD, rather than doing things sequentially.

Acquia, my Drupal startup

Soon thereafter, Acquia received a Series A term sheet from North Bridge, with Michael Skok leading the investment. We also selected Sigma Partners and Tim O'Reilly's OATV from all of the interested funds as co-investors with North Bridge; Tim had become both a friend and an advisor to me.

In many ways we were an unusual startup. Acquia itself didn't have a product to sell when we received our Series A funding. We knew our product would likely be support for Drupal, and evolve into an Acquia-equivalent of the RedHat Network. However, neither of those things existed, and we were raising money purely on a PowerPoint deck. North Bridge, Sigma and OATV mostly invested in Jay and me, and the belief that Drupal could become a billion-dollar company that would disrupt the web content management market. I'm incredibly thankful for Jay, North Bridge, Sigma and OATV for making a huge bet on me.

Receiving our Series A funding was an incredible vote of confidence in Drupal, but it was also a milestone with lots of mixed emotions. We had raised $7 million, which is not a trivial amount. While I was excited, it was also a big step into the unknown. I was convinced that Acquia would be good for Drupal and open source, but I also understood that this would have a transformative impact on my life. In the end, I felt comfortable making the jump because I found strong mentors to help translate my vision for Drupal into a business plan; Jay and Michael's tenure as entrepreneurs and business builders complimented my technical strength and enabled me to fine-tune my own business building skills.

In November 2007, we officially announced Acquia to the world. We weren't ready, but a reporter had caught wind of our stealth startup, and forced us to unveil Acquia's existence to the Drupal community with only 24 hours notice. We scrambled and worked through the night on a blog post. Reactions were mixed, but generally very supportive. I shared in that first post my hopes that Acquia would accomplish two things: (i) form a company that supported me in providing leadership to the Drupal community and achieving my vision for Drupal and (ii) establish a company that would be to Drupal what Ubuntu or RedHat were to Linux.

Acquia com march
An early version of Acquia.com, with our original logo and tagline. March 2008.

The importance of enduring values

It was at an offsite in late 2007 where we determined our corporate values. I'm proud to say that we've held true to those values that were scribbled onto our whiteboard 10 years ago. The leading tenet of our mission was to build a company that would "empower everyone to rapidly assemble killer websites".

Acquia vision

In January 2008, we had six people on staff: Gábor Hojtsy (Principal Acquia engineer, Drupal 6 branch maintainer), Kieran Lal (Acquia product manager, key Drupal contributor), Barry Jaspan (Principal Acquia engineer, Drupal core developer) and Jeff Whatcott (Vice President of Marketing). Because I was still living in Belgium at the time, many of our meetings took place screen-to-screen:

Typical work day

Opening our doors for business

We spent a majority of the first year building our first products. Finally, in September of 2008, we officially opened our doors for business. We publicly announced commercial availability of the Acquia Drupal distribution and the Acquia Network. The Acquia Network would offer subscription-based access to commercial support for all of the modules in Acquia Drupal, our free distribution of Drupal. This first product launch closely mirrored the Red Hat business model by prioritizing enterprise support.

We quickly learned that in order to truly embrace Drupal, customers would need support for far more than just Acquia Drupal. In the first week of January 2009, we relaunched our support offering and announced that we would support all things related to Drupal 6, including all modules and themes available on drupal.org as well as custom code.

This was our first major turning point; supporting "everything Drupal" was a big shift at the time. Selling support for Acquia Drupal exclusively was not appealing to customers; however, we were unsure that we could financially sustain support for every Drupal module. As a startup, you have to be open to modifying and revising your plans, and to failing fast. It was a scary transition, but we knew it was the right thing to do.

Building a new business model for open source

Exiting 2008, we had launched Acquia Drupal and the Acquia Network, and had committed to supporting all things Drupal. While we had generated a respectable pipeline for Acquia Network subscriptions, we were not addressing Drupal's biggest adoption challenges: usability and scalability.

In October of 2008, our team gathered for a strategic offsite. Tom Erickson, who was on our board of directors, facilitated the offsite. Red Hat's operational model, which primarily offered support, had laid the foundation for how companies could monetize open source, but we were convinced that the emergence of the cloud gave us a bigger opportunity and would help us address Drupal's adoption challenges. Coming out of that seminal offsite, we formalized the ambitious decision to build Drupal Gardens and Acquia Hosting. Here's why these two products were so important:

Solving for scalability: In 2008, scaling Drupal was a challenge for many organizations. Drupal itself scaled well, but the infrastructure that companies required to make Drupal scale was expensive and hard to find. We determined that the best way to help enterprise companies scale was by shifting the paradigm for web hosting from traditional rack models to the then-emerging promise of the "cloud".

Solving for usability: In 2008, CMSs like WordPress and Ning made it really easy for people to start blogging or to set up a social network. At the time, Drupal didn't encourage this same level of adoption for non-technical audiences. Drupal Gardens was created to offer an easy on-ramp for people to experience the power of Drupal, without worrying about installation, hosting, and upgrading. It was one of the first times we developed an operational model that would offer "Drupal-as-a-service".

Acquia roadmap

Fast forward to today: Acquia Hosting evolved into Acquia Cloud, and Drupal Gardens evolved into Acquia Cloud Site Factory. In 2008, this roadmap to move Drupal into the cloud was a bold move. Today, the cloud is the starting point for any modern digital architecture. By adopting the cloud into our product offering, I believe Acquia helped establish a new business model to commercialize open source. Today, I can't think of many open source companies that don't have a cloud offering.

Tom Erickson takes a chance on Acquia

Tom joined Acquia as an advisor and a member of our Board of Directors when Acquia was founded. Since the first time I met Tom, I always wanted him to be an integral part of Acquia. It took some convincing, but Tom eventually agreed to join us full time as our CEO in 2009. Jay Batson, Acquia's founding CEO, continued on as the Vice President at Acquia responsible for incubating new products and partnerships.

Moving from Europe to the United States

In 2010, after spending my entire life in Antwerp, I decided to move to Boston. The move would allow me to be closer to the team. A majority of the company was in Massachusetts, and at the pace we were growing, it was getting harder to help execute our vision all the way from Belgium. I was also hoping to cut down on travel time; in 2009 I flew 100,000 miles in just one year (little did I know that come 2016, I'd be flying 250,000 miles!).

This is a challenge that many entrepreneurs face when they commit to starting their own company. Initially, I was only planning on staying on the East Coast for two years. Moving 3,500 miles away from your home town, most of your relatives, and many of your best friends is not an easy choice. However, it was important to increase our chances of success, and relocating to Boston felt essential. My experience of moving to the US had a big impact on my life.

Building the universal platform for the world's greatest digital experiences

Entering 2010, I remember feeling that Acquia was really three startups in one: our support business (the Acquia Network, which was very similar to Red Hat's business model), our managed cloud hosting business (Acquia Hosting), and Drupal Gardens (a WordPress.com-style service based on Drupal). Welcoming Tom as our CEO would allow us to best execute on this offering, and moving to Boston enabled me to partner with Tom directly. It was during this transformational time that I think we truly transitioned out of our "founding period" and began to resemble the company I know today.

The decisions we made early in the company's life have proven to be correct. The world has embraced open source and cloud without reservation, and our long-term commitment to this disruptive combination has put us at the right place at the right time. Acquia has grown into a company with over 800 employees around the world; in total, we have 14 offices around the globe, including our headquarters in Boston. We also support an incredible roster of customers, including 16 of the Fortune 100 companies. Our work continues to be endorsed by industry analysts, as we have emerged as a true leader in our market. Over the past ten years I've had the privilege of watching Acquia grow from a small startup to a company that has crossed the chasm.

With a decade behind us, and many lessons learned, we are on the cusp of yet another big shift that is as important as the decision we made to launch Acquia Hosting and Drupal Gardens in 2008. In 2016, I led the project to update Acquia's mission to "build the universal platform for the world's greatest digital experiences". This means expanding our focus and becoming the leader in building digital customer experiences. Just as I openly shared our roadmap and strategy in 2009, I plan to share our next 10-year plan in the near future. It's time for Acquia to lay down the ambitious foundation that will enable us to be at the forefront of innovation and digital experience in 2027.

A big thank you

Of course, none of these results and milestones would be possible without the hard work of the Acquia team, our customers, partners, the Drupal community, and our many friends. Thank you for all your hard work. After 10 years, I continue to love the work I do at Acquia each day — and that is because of you.


          C++ Developer - Distributed Computing - Morgan Stanley - Montréal, QC   
Comfortable programming in a Linux environment, familiar with ksh and bash. Morgan Stanley is a global financial services firm and a market leader in investment...
From Morgan Stanley - Wed, 28 Jun 2017 00:14:01 GMT - View all Montréal, QC jobs
          L3 Linux Operations - Morgan Stanley - Montréal, QC   
Must be able to read, understand and write intermediate to complex scripts using KSH, Bash, Perl, Python etc. Job....
From Morgan Stanley - Tue, 20 Jun 2017 18:06:31 GMT - View all Montréal, QC jobs
          R-Studio 8.3 + License Key   
R-Studio + Key


R-Studio is a data-recovery program: software for recovering data after files have been deleted, after disk partitions have been removed or damaged, or when a disk has been formatted or, for example, hit by a virus attack. Supported file systems include FAT12/16/32, NTFS, NTFS5, Ext2FS (Linux), HFS/HFS+ (Macintosh), and UFS1/UFS2 (FreeBSD/OpenBSD/NetBSD/Solaris). R-Studio can recover data not only on the local computer but also from the hard drives of other computers on the local network.
          File: Ireland vs Canada Live Stream Rugby World cup 2015   
Best online TV stream for Rugby World Cup 2015: today, watch a powerful and brilliant game, Ireland vs Canada, streamed live from the Rugby World Cup league event. No time to get to the stadium? Don't worry: you can easily watch this major game online. Rugby World Cup 2015 live HD stream coverage works on Windows PCs, laptops, iPhone, Android, iPad, Mac, or any other device, and on iOS, Linux, or any operating system.
Match INFO :
Rugby World Cup 2015
Competitor: Ireland vs Canada
Match Date : Saturday 19th September 2015
 Match Time : 14.30 (ET)
TV Coverage : Live/Repeat
So take the opportunity and watch the super-exciting Ireland vs Canada game via live stream. We ensure your satisfaction: our online streaming TV service is high-definition and high-quality, with smooth sound and crystal-clear video. We promote this HD TV service for you and every rugby league fan.
Get instant access to the best stream for Ireland vs Canada. Watching online is straightforward: you only need a computer with a good internet connection, and you can watch this match streamed live. So, fans, don't miss the excitement: watch the Ireland vs Canada 2015 live stream via online telecast.

          Please Help friends   

Can you explain which files need to be changed? I did as the user guide says. Can you explain it here?
I contacted GoDaddy five times and got different opinions: 1. take virtual hosting, because it doesn't support it; 2. take Linux hosting; 3. it's an open-source problem; 4. contact Invoice Plane.

Whom shall I contact? I like and trust Invoice Plane; please help.

I am new to this; please demonstrate the steps to follow to upload it to GoDaddy Windows Plesk hosting.

It is running with no issues on a XAMPP server!


          Bricsys BricsCad Platinium v17.2.09.1 MACOSX   
Bricsys BricsCad Platinium v17.2.09.1 | MACOSX | 221 Mb. A powerful CAD platform, with features familiar to you from native .dwg applications. BricsCAD® unifies advanced 2D design with the intelligence of 3D direct modeling. For Windows, Linux, and Mac.
          Comment on "How do I synchronize my settings in Windows 10?" by D.m.   
Hi, you can simply install Linux alongside Windows; when booting, you just choose which system you want to use! Regards, D.m.
          ArchLabs Review: A Quick Look At The Rising Arch Based Linux Distribution   
ArchLabs is a new yet promising Linux distribution that is getting some serious attention from Arch Linux users.
          The "top" command: display running processes in real time   

This series introduces Linux commands, from basic syntax to options and concrete usage examples. This installment covers the "top" command, which displays information about running processes in real time.
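
On Linux, top builds its display by sampling the per-process entries under /proc. As a rough illustration of that idea (a generic sketch, not code from the article), here is a minimal process lister:

```python
import os

def list_processes():
    """Return (pid, command name) pairs by scanning /proc, as top does."""
    procs = []
    for pid in filter(str.isdigit, os.listdir("/proc")):
        try:
            with open("/proc/%s/comm" % pid) as f:
                procs.append((int(pid), f.read().strip()))
        except FileNotFoundError:
            pass  # the process exited between listing /proc and reading it
    return procs

for pid, name in sorted(list_processes())[:5]:
    print(pid, name)
```

The real top additionally reads /proc/&lt;pid&gt;/stat for CPU and memory figures and refreshes on an interval; for a one-shot snapshot in scripts, `top -b -n 1` runs top in batch mode.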




          Comment on The State of Web Application Testing by Allan   
Skype has opened its web-based client beta to the entire world, after launching it broadly in the U.S. and U.K. earlier this month. Skype for Web also now supports Chromebook and Linux for instant-messaging communication (no voice and video yet; those require a plug-in installation). The expansion of the beta adds support for a longer list of languages, to help strengthen its global usability.
           PHOTO: Smart greenhouse - the project created by three students that will represent Romania at the largest Linux conference in Europe   
An automated greenhouse-management system, a project built by three students from the Politehnica University of Bucharest and from the University of Bucharest (Faculty of Mathematics and Computer Science), won the Linux Embedded Challenge competition, and the team will represent Romania at the LinuxCon / Embedded Linux 2017 conference this autumn in Prague. How this greenhouse works, and which other projects were among the finalists.
          TransUnion modernizes IT environment   

TransUnion, a leading global risk and information solutions provider, used mainframe computers to support its global business. To improve IT performance and costs, the company launched an initiative, Project Spark, to migrate all of its applications and systems to a Red Hat operating environment.

With the new infrastructure—based on Red Hat Enterprise Linux, Red Hat JBoss Middleware, and other Red Hat products—TransUnion achieved faster time to market, greater competitive advantage, improved employee satisfaction, and significant operating cost savings.


          The Ministry of Defense allocated 1.2 billion rubles for secure PCs running Astra Linux   

The Russian Ministry of Defense intends to purchase thousands of secure PCs and laptops with the Astra Linux OS preinstalled, for military universities and for its employees across the country, at a cost of 1.3 billion rubles.


          Linux Kernel 4.8.9 Denial Of Service Vulnerability   
The TCP stack in the Linux kernel before 4.8.10 mishandles skb truncation, which allows local users to cause a denial of service (system crash) via a crafted application that makes sendto system calls, related to net/ipv4/tcp_ipv4.c and net/ipv6/tcp_ipv6.c.
          Citrix Xenserver 6.0.2 Linux Remote Code Execution Vulnerability   
Authenticated read-only administrator can corrupt host database
          Linux Kernel 3.10 device compromise Execute Code Vulnerability   
Linux Kernel is prone to a local code-execution vulnerability. This allows a local attacker to execute arbitrary code in the context of the user running the affected application. Failed exploit attempts may result in a denial-of-service condition.
          Citrix Xenserver 6.2.0 Linux Remote Code Execution Vulnerability   
Authenticated read-only administrator can cancel tasks of other administrators
          Technical Account Manager (Engineer) - High Availability, Inc. - Audubon, PA   
Technical certifications from NetApp, VMware, Cisco and/or EMC. 5+ years enterprise experience with data center technologies such as Windows, Unix, Linux,...
From High Availability, Inc. - Thu, 11 May 2017 06:12:15 GMT - View all Audubon, PA jobs
          Network Engineer - Globus Medical - Audubon, PA   
Windows, Cisco Systems, UNIX, Linux, ESXi. The Network Engineer position oversees the installation, configuration and maintenance of networked information...
From Globus Medical - Mon, 27 Mar 2017 07:24:05 GMT - View all Audubon, PA jobs
          First release of the HumbleNet networking library, with support for running in the browser   
Developers from the Mozilla community have presented the first release of the HumbleNet project, which develops a cross-platform networking library along with the server components (peer-server) required for its operation. The library provides a simple C API for building networked applications, but handles network connections over the WebRTC and WebSockets protocols, which allows it to be used not only on traditional systems such as Windows, macOS and Linux, but also in a web browser via Asm.js and WebAssembly. The library code is written in C++ (Emscripten is used to compile to Asm.js and WebAssembly) and ships under the BSD license.
          Release of Docker 17.06, a container virtualization management system   
A release of Docker 17.06 has been presented: a toolkit for managing isolated Linux containers that provides a high-level API for manipulating containers at the level of isolating individual applications. Docker lets you run arbitrary processes in isolation without worrying about assembling the container contents, and then transfer and clone the containers built for those processes to other servers, taking on all the work of creating, maintaining and servicing containers. The toolkit is based on the standard isolation mechanisms built into the Linux kernel: namespaces and control groups (cgroups). The Docker code is written in Go and distributed under the Apache 2.0 license.
          System76 announces a new Linux distribution, Pop!_OS   
System76, a company specializing in laptops, PCs and servers that ship with Linux, has introduced Pop!_OS, a new Linux distribution that will ship on System76 hardware in place of the previously offered Ubuntu. Pop!_OS continues to be built from the Ubuntu package base, but features a reworked desktop environment and targets a different audience.
          FS#54652: [python-tensorflow-cuda] ModuleNotFoundError upon importing tensorflow   
Package version: python-tensorflow-cuda 1.2.1-1


Steps to reproduce:
```
[omtcyfz@omtcyfz-arch ~]$ python
Python 3.6.1 (default, Mar 27 2017, 00:27:06)
[GCC 6.3.1 20170306] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/python3.6/site-packages/tensorflow/__init__.py", line 24, in <module>
    from tensorflow.python import *
  File "/usr/lib/python3.6/site-packages/tensorflow/python/__init__.py", line 63, in <module>
    from tensorflow.python.framework.framework_lib import *
  File "/usr/lib/python3.6/site-packages/tensorflow/python/framework/framework_lib.py", line 100, in <module>
    from tensorflow.python.framework.subscribe import subscribe
  File "/usr/lib/python3.6/site-packages/tensorflow/python/framework/subscribe.py", line 26, in <module>
    from tensorflow.python.ops import variables
  File "/usr/lib/python3.6/site-packages/tensorflow/python/ops/variables.py", line 26, in <module>
    from tensorflow.python.ops import control_flow_ops
  File "/usr/lib/python3.6/site-packages/tensorflow/python/ops/control_flow_ops.py", line 70, in <module>
    from tensorflow.python.ops import tensor_array_ops
  File "/usr/lib/python3.6/site-packages/tensorflow/python/ops/tensor_array_ops.py", line 33, in <module>
    from tensorflow.python.util import tf_should_use
  File "/usr/lib/python3.6/site-packages/tensorflow/python/util/tf_should_use.py", line 28, in <module>
    from backports import weakref  # pylint: disable=g-bad-import-order
ModuleNotFoundError: No module named 'backports'
>>> quit()
```

Since 1.2.0, importing tensorflow produces the error described above, which indicates the absence of the `backports` Python package. I didn't find this package in the Arch package index, and it's certainly not in the [python-tensorflow-cuda] package requirements, but it seems the solution would be simply installing that package along with python-tensorflow-cuda.
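
To confirm which dependency is actually missing before patching the package, you can probe the import system directly; this is a generic diagnostic sketch, not part of python-tensorflow-cuda:

```python
import importlib.util

def has_module(name):
    """Return True if `name` resolves to an importable module."""
    try:
        return importlib.util.find_spec(name) is not None
    except ModuleNotFoundError:
        # Raised when a parent package of a dotted name is itself missing
        return False

# tensorflow 1.2 runs `from backports import weakref`, so both must resolve:
for mod in ("backports", "backports.weakref"):
    print(mod, has_module(mod))
```

If both report False, installing the PyPI package `backports.weakref` (e.g. via pip) should satisfy the import; the exact Arch packaging fix is of course up to the maintainer.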
          FS#54645: Add Yubikey support for keepassxc   
An extra patch exists for keepassxc; it would be nice if you could enable it for the community package.

https://www.archlinux.org/packages/community/x86_64/keepassxc

Thanks
          DevOps Engineer - Linux and Google's Go   
CO-Greenwood Village, Outstanding opportunity to join a national leader in Cable, Media & Television. Our client is seeking a DevOps Engineer with advanced development skills in writing systems-level code to automate configuration and release management in a highly complex environment. Required Skills: . 7+ years' DevOps Engineering / Linux Engineering experience . Continuous Integration Engineer with expert-level Jenk
          Practical Networking for Linux Admins: IPv6 and IPv6 LAN Addressing   
We're cruising now. We know important basics about TCP/IP and IPv6. Today we're learning about private and link-local addressing. Yes, I know, I promised routing. That comes next.
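
One concrete piece of link-local addressing: an interface's fe80::/64 address is classically derived from its MAC address via modified EUI-64. A small sketch of that standard derivation (illustrative, not code from the article):

```python
import ipaddress

def mac_to_link_local(mac):
    """Derive the modified EUI-64 IPv6 link-local address for a MAC address."""
    octets = [int(part, 16) for part in mac.split(":")]
    octets[0] ^= 0x02  # flip the universal/local bit of the first octet
    eui64 = octets[:3] + [0xFF, 0xFE] + octets[3:]  # insert ff:fe in the middle
    suffix = ":".join(
        "{:02x}{:02x}".format(eui64[i], eui64[i + 1]) for i in range(0, 8, 2)
    )
    return str(ipaddress.IPv6Address("fe80::" + suffix))

print(mac_to_link_local("00:11:22:33:44:55"))  # fe80::211:22ff:fe33:4455
```

Note that many modern systems instead generate stable-privacy interface identifiers (RFC 7217), so the address shown by `ip -6 addr` may not match the EUI-64 form.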
          Open spec, sandwich-style SBC runs Linux on i.MX6UL based COM   
Grinn and RS Components unveiled a Linux-ready “Liteboard” SBC that uses an i.MX6 UL LiteSOM COM, with connectors compatible with Grinn Chiliboard add-ons. UK-based distributor RS Components is offering a new sandwich-style SBC from Polish embedded firm Grinn. The £60 ($78) Liteboard, which is available with schematics but no community support site, is designed to […]
          Linux-friendly COM Express duo taps PowerPC based QorIQs   
Artesyn’s rugged line of COMX-T modules debuts with COMs using NXP’s quad-core QorIQ T2081 and QorIQ T1042 SoCs, clocked at 1.5GHz and 1.4GHz, respectively. Artesyn Embedded Computing has launched a line of 125 x 95mm COM Express Basic Type 6 COMX-T Series computer-on-modules that run Linux on NXP’s Power Architecture based QorIQ T processors: The […]
          How to Install Gitlab On Debian 9 Stretch Linux   
Gitlab is an awesome free software alternative to Github. It allows teams and individual developers to host and manage their own projects on servers that they control. Debian Stretch provides a stable foundation for Gitlab and can make for an excellent code repository server. Plus, Gitlab's Omnibus Package makes installation super simple.
          Linux on Azure: What are your choices?   
More than a third of virtual machines running on Azure are Linux VMs. Here's what's available.
          Create mini Linux servers using the Odroid C2   
If you're into trying out various distributions or enjoy having various Linux 'servers' to ssh into and play around with, then you should get started with single-board computers. They're cheap, robust and fun. This quick tutorial focuses on the Odroid C2, whose features vastly outweigh what's available for the popular Raspberry Pi.
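
Once a few of these boards are on the network, a tiny reachability check saves guessing which ones are up before you ssh in. A generic sketch (the hostnames below are placeholders, not from the article):

```python
import socket

def port_open(host, port=22, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds (e.g. sshd is up)."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: check a couple of hypothetical boards on the LAN
for host in ("odroid-c2.local", "192.168.1.50"):
    print(host, "ssh up" if port_open(host) else "unreachable")
```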
          Ubuntu 17.10 Finishes Its Transition to Python 3.6, Ubuntu 16.10 EOL Coming July   
Canonical today published a new installment of the Ubuntu Foundations Team weekly newsletter to inform the Ubuntu Linux community of the progress made since last week's update.
          Ubuntu Kylin, a Linux Distribution with a Microsoft Windows Experience   
Ubuntu Kylin is an open-source Linux distribution based on Ubuntu, developed since 2013 mainly by a Chinese team alongside dozens of Linux developers all over the world. It contains the basic features you would expect from Ubuntu, plus its own desktop environment and applications.
          OutlawCountry: a CIA exploit for Linux systems   
If you have to grant the CIA or the NSA one thing when it comes to creating exploits, it is their originality with names. One example is the latest one revealed by the folks at Wikileaks, in their ongoing Vault 7 collection: ...
          Save file on Puppy Linux Slacko   

I successfully installed Puppy Linux Slacko 6.3.2 on a USB stick. (The Lexmark printer works, too!)
During installation, the save file can optionally be placed on a hard-disk drive, which would speed up reading and writing.
When booting from USB, however, only a save file located on the stick is recognized.
Does anyone know a solution?


          DVD from a LINUX magazine won't open   

Hi,

I'm new here - I just signed up.

I have the following problem: I want to get started with Linux / Ubuntu / Knoppix, but first I'd like to try out which system best fits my needs.


          Which virus scanner do you use?   

Hello,

apart from Brain, which virus scanner do you use on your Windows or Linux PCs?


          Linux Systems Administrator - Between Technology - Barcelona   
At BETWEEN we select and back the best talent in the technology sector. We take part in a wide variety of cutting-edge projects, working with the latest technologies. BETWEEN currently has a team of more than 350 people. In the Development area we cover web and mobile projects and work in fields such as BI, IoT, Big Data and R&D. In the Operations area we deliver Service Desk, IT Infrastructure and Cloud projects, among others. ...
          Junior Systems Technician - everis - Madrid   
everis is looking to hire a Junior Systems Technician. Tasks: administration of Red Hat based Linux systems (5.6 and above); writing bash/python scripts to automate day-to-day operations; administration and operation of WebLogic 11g application servers; Oracle database operations: exports, imports, maintenance of user schemas, running SQL scripts; handling requests through the Redmine ticketing tool. Requirements: knowledge of...
          List of Cross-Platform Alternatives   
My last list of free desktop applications was for Windows only. Since starting to use Mac and Linux, I'm particularly interested in applications that are cross-platform (i.e. they work on Linux, Mac and Windows). Here is a collection of those applications I previously used on Windows and the alternatives on Linux and Mac. Category […]
          Moving on to Ubuntu and Rekindle this blog   
It had been a long time since I last blogged here. Since this blog started, I had moved from using Windows XP, to Windows Vista, to MacOS X and now to Ubuntu as my desktop operating system. And finally I can declare that my desktop is completely free (well almost). When the desktop is Linux, […]
          Acer Aspire One   
Acer Aspire One ZG5 laptop. 80 GB hard drive. 32-bit elementary OS Linux. New battery. 1 GB of RAM. Mains power cable. 150 euros.
          Fujitsu Siemens Lifebook   
Fujitsu Siemens Lifebook. 2 GB of RAM, 80 GB hard drive. Swivel screen. The battery lasts more than an hour and a half. USB 2.0, network card, SD card reader, DVD writer, Wi-Fi card, Bluetooth. Mains power cable. elementary OS Linux operating system. Carrying bag. Price: 150 euros.
          gDraw - free software for you!   

gDraw Screenshot

I recently wrote a little piece of software – and I thought I should share it before Christmas. It is a 2D drawing program which creates a G-Code file that can be 3D printed. I just used it to make greeting cards and maybe you need a last minute gift and want to do the same.

Or you want to use it to make delicate window decoration, or sophisticated business cards, or you just need to keep your kids busy, or perhaps you have a much better idea for what this code can be used. In that case, drop me a line, because I’m curious to hear what you came up with!

Scroll down to find out how to download and use gDraw.

2D greeting card printed with a 3D printer


Here’s how it works:

Download and installation:

First you need to download gDraw as .zip archive from my server – or you can find the code also on Github. gDraw is written in Processing, which you also have to download. Processing works with Windows, Mac OS X and Linux and you can get it here for free.

How to use:

Unzip the downloaded archive and place the folder "gDraw_V0_15" in Processing's "Sketches" folder. Then run the gDraw sketch from Processing.

You can draw in two modes, the free mode and the fixed mode. "Free" is nice for organic drawings; "fixed" is good for geometric drawings, as it always creates straight lines on a metric grid. You can toggle between the two modes either by pressing the "F" key on your keyboard or by hitting the free/fixed button in the menu on the left.

You can zoom in and out of the canvas with the mouse wheel.

Interrupt the line by hitting the space bar. When the printhead just moves without printing, the program displays its path as a thin line, because the printhead might still squeeze out small amounts of plastic along those paths, so you may want to keep control over exactly where the printhead moves.

You can save drawings by hitting the save button and you can load them again by hitting the load button. As the current drawing will not be deleted when you load a path, you can merge drawings by loading one drawing after another.

Finally, in order to create printable G-Code, you hit the "save G-Code" button. You have to give your file the extension ".gcode", otherwise your printer might not recognize it as a valid G-Code file.

How it works:

G-Code is the "language" that a 3D printer understands. It is a text file with a list of coordinates for the X, Y and Z axes, and also for E (the extrusion motor). The printer reads this file, moves from coordinate to coordinate and squeezes out plastic as indicated by the value of E.

gDraw is a simple vector drawing program that turns your drawings into G-Code so you can print your drawings as lines of plastic.
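To make the G-Code format concrete, here is a minimal Python sketch of the core idea behind a program like gDraw: turning a 2D polyline into extrusion moves. The header lines and the feed/extrusion values are illustrative assumptions, not gDraw's actual output, and a real printer still needs a proper start/end sequence:

```python
# Sketch: convert a 2D polyline into G-Code that draws one plastic line.
# Values for feed rate and extrusion-per-mm are made up for illustration.

def polyline_to_gcode(points, extrusion_per_mm=0.05, feed=1200):
    """Convert a list of (x, y) points into G-Code drawing one line."""
    lines = ["G21 ; units: mm", "G90 ; absolute positioning", "G92 E0 ; reset extruder"]
    x0, y0 = points[0]
    # Travel move (G0) to the start of the line: no extrusion, faster feed.
    lines.append(f"G0 X{x0:.2f} Y{y0:.2f} F{feed * 2}")
    e = 0.0
    for (x1, y1) in points[1:]:
        # Extrude an amount proportional to the distance travelled.
        dist = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
        e += dist * extrusion_per_mm
        lines.append(f"G1 X{x1:.2f} Y{y1:.2f} E{e:.4f} F{feed}")
        x0, y0 = x1, y1
    return "\n".join(lines)

print(polyline_to_gcode([(0, 0), (10, 0), (10, 10)]))
```

Each G1 line moves the head while the E value grows with the distance covered, which is exactly how a drawing becomes a printed line of plastic.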

Disclaimer:

I have an Ultimaker 2, so I wrote the program in such a way that it works with my printer. If you have a different printer, you might have to adapt the G-Code header in order to make it work for you. Have a look at lines 486-511 of my code. This is where the G-Code header for the Ultimaker 2 is written.


          Lynda – Learning Computer Security Investigation and Response   

DESCRIPTION: Lynda – Learning Computer Security Investigation and Response – learn the fundamentals of computer forensics and cybercrime investigation. Discover how to collect and recover evidence of cybercrimes such as harassment and identity theft on Mac, Windows and Linux computers. TECHNICAL DATA: Learning Computer Security Investigation and Response. Size: […]

The post Lynda – Learning Computer Security Investigation and Response appeared first on Bacterias.


          USN-3342-2: Linux kernel (HWE) vulnerabilities    

Ubuntu Security Notice USN-3342-2

29th June, 2017

linux-hwe vulnerabilities

A security issue affects these releases of Ubuntu and its derivatives:

  • Ubuntu 16.04 LTS

Summary

Several security issues were fixed in the Linux kernel.

Software description

  • linux-hwe - Linux hardware enablement (HWE) kernel

Details

USN-3342-1 fixed vulnerabilities in the Linux kernel for Ubuntu 16.10.
This update provides the corresponding updates for the Linux Hardware
Enablement (HWE) kernel from Ubuntu 16.10 for Ubuntu 16.04 LTS.

USN-3333-1 fixed a vulnerability in the Linux kernel. However, that
fix introduced regressions for some Java applications. This update
addresses the issue. We apologize for the inconvenience.

It was discovered that a use-after-free flaw existed in the filesystem
encryption subsystem in the Linux kernel. A local attacker could use this
to cause a denial of service (system crash). (CVE-2017-7374)

Roee Hay discovered that the parallel port printer driver in the Linux
kernel did not properly bounds check passed arguments. A local attacker
with write access to the kernel command line arguments could use this to
execute arbitrary code. (CVE-2017-1000363)

Ingo Molnar discovered that the VideoCore DRM driver in the Linux kernel
did not return an error after detecting certain overflows. A local attacker
could exploit this issue to cause a denial of service (OOPS).
(CVE-2017-5577)

Li Qiang discovered that an integer overflow vulnerability existed in the
Direct Rendering Manager (DRM) driver for VMWare devices in the Linux
kernel. A local attacker could use this to cause a denial of service
(system crash) or possibly execute arbitrary code. (CVE-2017-7294)

It was discovered that a double-free vulnerability existed in the IPv4
stack of the Linux kernel. An attacker could use this to cause a denial of
service (system crash). (CVE-2017-8890)

Andrey Konovalov discovered an IPv6 out-of-bounds read error in the Linux
kernel's IPv6 stack. A local attacker could cause a denial of service or
potentially other unspecified problems. (CVE-2017-9074)

Andrey Konovalov discovered a flaw in the handling of inheritance in the
Linux kernel's IPv6 stack. A local user could exploit this issue to cause a
denial of service or possibly other unspecified problems. (CVE-2017-9075)

It was discovered that dccp v6 in the Linux kernel mishandled inheritance.
A local attacker could exploit this issue to cause a denial of service or
potentially other unspecified problems. (CVE-2017-9076)

It was discovered that the transmission control protocol (tcp) v6 in the
Linux kernel mishandled inheritance. A local attacker could exploit this
issue to cause a denial of service or potentially other unspecified
problems. (CVE-2017-9077)

It was discovered that the IPv6 stack in the Linux kernel was performing
its over write consistency check after the data was actually overwritten. A
local attacker could exploit this flaw to cause a denial of service (system
crash). (CVE-2017-9242)

Update instructions

The problem can be corrected by updating your system to the following package version:

Ubuntu 16.04 LTS:
linux-image-4.8.0-58-lowlatency 4.8.0-58.63~16.04.1
linux-image-4.8.0-58-generic-lpae 4.8.0-58.63~16.04.1
linux-image-generic-hwe-16.04 4.8.0.58.29
linux-image-lowlatency-hwe-16.04 4.8.0.58.29
linux-image-4.8.0-58-generic 4.8.0-58.63~16.04.1
linux-image-generic-lpae-hwe-16.04 4.8.0.58.29

To update your system, please follow these instructions: https://wiki.ubuntu.com/Security/Upgrades.

After a standard system update you need to reboot your computer to make
all the necessary changes.

ATTENTION: Due to an unavoidable ABI change the kernel updates have
been given a new version number, which requires you to recompile and
reinstall all third party kernel modules you might have installed.
Unless you manually uninstalled the standard kernel metapackages
(e.g. linux-generic, linux-generic-lts-RELEASE, linux-virtual,
linux-powerpc), a standard system upgrade will automatically perform
this as well.

References

CVE-2017-1000363, CVE-2017-5577, CVE-2017-7294, CVE-2017-7374, CVE-2017-8890, CVE-2017-9074, CVE-2017-9075, CVE-2017-9076, CVE-2017-9077, CVE-2017-9242, LP: 1699772, https://www.ubuntu.com/usn/usn-3333-1


          Investing in Forex with Plus500   
Investing in Forex with Plus500

Of all the financial markets that exist today, Forex is the most important, given the large number of users who invest in currencies. To operate in this market you need a suitable broker to rely on, and here Plus500 stands out.

It is a platform that offers very good financial services and also has extensive experience in this sector.

Why Forex?

Although Forex is not the only financial market you can invest in, it is the most important one in this sector, with more than 4 million transactions taking place every day. The popularity of the currency market lies precisely in that: the large number of users who take part in it throughout the year. Forex is also a market that never rests; you can invest in it every day, and it is open 24 hours.

In this great financial market, multiple currencies are available. The world's most important currencies meet in Forex for us to carry out our investments: the euro, the dollar and the pound are the most representative, although other currencies such as the yen and the Swiss franc are also available.

Being such a busy market, where a large amount of money moves, it is important to have a broker that offers its services for investing in it. Plus500 does, and it also provides a large number of trading tools for operating in Forex, all free of charge, such as reports, limits and charts.

Plus500: a different broker

Plus500 is one of the most popular brokers currently on the market, not only for the facilities it offers around Forex but also because it puts numerous financial services at our disposal. It was founded in 2008 in the United Kingdom and has kept renewing itself since then, reaching the top positions in this sector.

Plus500's headquarters are in London, but it also has offices in countries such as Israel and Australia. In any case, Plus500 covers most of the world, as shown by its exclusive programme with more than 70,000 affiliates: 500 Affiliates, a system available in 29 languages with access to 50 different markets.

Plus500 - full review at https://cinterfor.net/plus500/ - is one of the few brokers listed on a stock exchange, and for this reason it also guarantees its users' funds, up to a maximum of 20,000 euros each. It is also backed by regulators such as the UK Financial Conduct Authority (FCA), Spain's National Securities Market Commission (CNMV) and the Cyprus Securities and Exchange Commission (CySEC).

What types of account are there?

To use all the tools Plus500 offers, we must open an account with the broker. The main and most widely used one is the Standard account, a basic account that offers a large number of trading tools, such as risk management and market positioning tools.

This account also gives us direct access to multiple assets and the chance to put all our questions to financial experts. The minimum initial deposit for a Standard account is 100 euros.

Plus500 has also created a new account called Gold. It is very similar to the previous one, offering the same tools; the difference is that all the fees we have to pay carry a 5% discount.

In addition to these two real accounts, Plus500 has a Demo account, so we can familiarize ourselves with the broker before investing our own money. This account requires no initial deposit and can be used without limit.

Plus500's trading platforms

An important aspect of brokers of this kind is the trading platforms they make available. Plus500 offers two. The first is WebTrader, a platform that requires no download and works directly from the web. Its biggest advantage is that it is compatible with most operating systems, including Mac, Windows and Linux.

It also offers advanced tools and charts, and the possibility of investing with Contracts for Difference (CFDs). The platform includes a free trial.

The second platform is WindowsTrader, which does require a download; the software is available on the broker's website. In this case the platform is only compatible with Windows, but it too offers multiple, very complete trading tools.

A final option Plus500 provides is its mobile app, from which we can use all the tools available in the other versions. The app is free and accessible, and is compatible with Android, iPhone, Windows Phone, iPad and Apple Watch.

Beyond Forex...

Apart from Forex, Plus500 gives us the chance to invest in other financial markets. One is stocks, where we can back companies from many parts of the world, such as the United States, Germany, the United Kingdom and France.

Multiple stock indices are also available, with the DAX, IBEX 35 and Nasdaq among the most popular, and finally commodities such as gold, silver, palladium, natural gas and oil.

Tags: foreign exchange market, currency, currencies, currency market, stock exchange, broker, trading, market, stock indices, dax, ibex35, nasdaq


          Linux Kernel ldso_dynamic Stack Clash Privilege Escalation   
Linux kernel ldso_dynamic stack clash privilege escalation exploit. This affects Debian 9/10, Ubuntu 14.04.5/16.04.2/17.04, and Fedora 23/24/25.
          Linux Kernel ldso_hwcap_64 Stack Clash Privilege Escalation   
Linux kernel ldso_hwcap_64 stack clash privilege escalation exploit. This affects Debian 7.7/8.5/9.0, Ubuntu 14.04.2/16.04.2/17.04, Fedora 22/25, and CentOS 7.3.1611.
          Linux Kernel offset2lib Stack Clash   
Linux kernel offset2lib stack clash exploit.
          Ubuntu Security Notice USN-3342-2   
Ubuntu Security Notice 3342-2 - USN-3342-1 fixed vulnerabilities in the Linux kernel for Ubuntu 16.10. This update provides the corresponding updates for the Linux Hardware Enablement kernel from Ubuntu 16.10 for Ubuntu 16.04 LTS. USN-3333-1 fixed a vulnerability in the Linux kernel. However, that fix introduced regressions for some Java applications. This update addresses the issue. It was discovered that a use-after-free flaw existed in the filesystem encryption subsystem in the Linux kernel. A local attacker could use this to cause a denial of service. Various other issues were also addressed.
          Red Hat Security Advisory 2017-1664-01   
Red Hat Security Advisory 2017-1664-01 - In accordance with the Red Hat Enterprise Linux Errata Support Policy, Advanced Mission Critical for Red Hat Enterprise Linux 6.2 will be retired as of December 31, 2017, and active support will no longer be provided. Accordingly, Red Hat will no longer provide updated packages, including Critical Impact security patches or Urgent Priority bug fixes, for Red Hat Enterprise Linux 6.2 AMC after December 31, 2017.
          Linux Kernel ldso_hwcap Stack Clash Privilege Escalation   
Linux kernel ldso_hwcap stack clash privilege escalation exploit. This affects Debian 7/8/9/10, Fedora 23/24/25, and CentOS 5.3/5.11/6.0/6.8/7.2.1511.
          Ubuntu Linux 17.10 'Artful Aardvark' Alpha 1 now available for download   
There have been tons of Ubuntu news lately, with the death of Unity continuing to be felt in the Linux community. Just yesterday, a company that is one of Ubuntu's biggest proponents -- System76 -- announced it was creating its own operating system using that distribution as a base. While some might see that as bad news for Canonical's distro, I do not -- some of System76's contributions should find their way into Ubuntu upstream. Today, we get some more positive news, as Ubuntu Linux 17.10 'Artful Aardvark' has officially achieved Alpha status. While details about changes and such are virtually… [Continue Reading]
          System76 unveils its own Ubuntu-based Linux distribution called 'Pop!_OS'   
When Canonical announced the death of the Unity desktop environment, it sent shock waves through the Linux community. After all, Ubuntu is probably the most popular Linux-based desktop operating system and switching to GNOME was changing its trajectory. With Unity, Canonical was promising Ubuntu would be an OS that could scale from smartphone to desktop with a focus on convergence, and then suddenly, it wasn't. Overnight, Ubuntu became just another desktop distro -- not necessarily a bad thing. While this hit many people hard, computer-seller System76 was probably impacted the most. The company only sells machines running Ubuntu, meaning its… [Continue Reading]
          Sr. Software Engineer - ARCOS LLC - Columbus, OH   
Oracle, PostgreSQL, C, C++, Java, J2EE, JBoss, HTML, JSP, JavaScript, Web services, SOAP, XML, ASP, JSP, PHP, MySQL, Linux, XSLT, AJAX, J2ME, J2SE, Apache,...
From ARCOS LLC - Tue, 13 Jun 2017 17:31:59 GMT - View all Columbus, OH jobs
          McAfeeVSEForLinux installation   
Hi! I'm trying to install McAfeeVSEForLinux-2.0.2.29099 on my Fedora 25 and I am getting the following error: Enter accept or reject: accept ISecGRt is not supported on this distribution - Fedora release 25 (Twenty Five) error: %prein(ISecGRt-10.0.0-1517.x86_64) scriptlet failed, exit status 255 error: ISecGRt-10.0.0-1517.x86_64: install failed <13>Jun 30 20:24:35 mchacon: vsel-installer: Failed to install ISecGRt rpm package I know what the error message says, but I want to know if there is some hack to make it work on my Fedora. Thanks!
          Comment on {Another} DIY Stained Wood Map by Meri   
Skype has opened its web-based client beta to the whole world, after launching it mainly in the USA and U.K. earlier this month. Skype for Web also now supports Chromebook and Linux for instant messaging (no voice and video yet; those require a plug-in installation). The expansion of the beta adds support for a longer list of languages to help strengthen its international usability
          Comment on 5 Must Know Tax Tips by Glen   
Skype has opened its web-based client beta to the whole world, after launching it mainly in the USA and U.K. earlier this month. Skype for Web also now supports Linux and Chromebook for instant messaging (no voice and video yet; those require a plug-in installation). The expansion of the beta adds support for a longer list of languages to help strengthen its international usability
          Design flaws of the Linux page cache   
          Move my website to my dedicated server by famin69   
I am looking to set up cPanel and move my website from another server. (Budget: $8 - $15 USD, Jobs: Linux, MySQL, PHP, Web Hosting, Website Management)
          Comment on HOW TO: Make the Most of a Visit to Machu Picchu by Ralph   
Skype has opened its web-based client beta to the whole world, after launching it mainly in the USA and U.K. earlier this month. Skype for Web also now supports Linux and Chromebook for instant messaging (no voice and video yet; those require a plug-in installation). The expansion of the beta adds support for a longer list of languages to help strengthen its international usability
          Linux Systemd Bug Could Have Led to Crash, Code Execution   
LinuxSecurity.com: Developers with Canonical pushed out a handful of patches for the Linux-based operating system Ubuntu this week, including one that resolves a bug that could have let an attacker cause a denial of service or execute arbitrary code with a TCP payload.
          Shadow Brokers hike prices for stolen NSA exploits, threaten to out ex-Uncle Sam hacker   
LinuxSecurity.com: The Shadow Brokers is once again trying to sell yet more stolen NSA cyber-weapons, raising the asking price in the process. And the gang has threatened to out one of the US spy agency's ex-operatives that it claims hacked Chinese targets.
          How to keep Debian Linux patched with latest security updates automatically   
LinuxSecurity.com: How do I keep my server/cloud computer powered by Debian Linux 9.x or 8.x current with the latest security updates automatically? Is there a tool to apply security patches automatically?
          Linux: A Hacker's Preference   
LinuxSecurity.com: Hackers have tools that they use to carry out various types of operations. But among them all, the most crucial is the Linux Operating System.
          A critical flaw allows hacking Linux machines with just a malicious DNS Response    
LinuxSecurity.com: Chris Coulson, Ubuntu developer at Canonical, has found a critical vulnerability in Linux that can be exploited to remotely hack machines running the popular OS. The flaw, tracked as CVE-2017-9445, resides in systemd, the init system and service manager for Linux operating systems.
          Fedora 24: openvpn Security Update   
LinuxSecurity.com: Updates to the latest upstream OpenVPN 2.3.17, containing security updates for CVE-2017-7508, CVE-2017-7520 and CVE-2017-7521.
          SuSE: 2017:1745-1: important: unrar   
LinuxSecurity.com: An update that fixes one vulnerability is now available.
          SuSE: 2017:1744-1: important: python-pycrypto   
LinuxSecurity.com: An update that fixes one vulnerability is now available.
          SuSE: 2017:1742-1: important: xen   
LinuxSecurity.com: An update that solves two vulnerabilities and has 9 fixes is now available.
          Slackware: 2017-180-03: httpd Security Update   
LinuxSecurity.com: New httpd packages are available for Slackware 13.0, 13.1, 13.37, 14.0, 14.1, 14.2, and -current to fix security issues.
          Slackware: 2017-180-01: Slackware 14.1 kernel Security Update   
LinuxSecurity.com: New kernel packages are available for Slackware 14.1 to fix security issues.
          Linux Training Institute in Delhi | Raise your Learning Skills at NetLabs ITS (Delhi)   
For great job opportunities in the IT field, join online classes now at Netlabs ITS, the leading Linux training institute in Delhi. Call now on +91-9278208308. http://www.netlabsits.com/Courses/Linux-Certification-Training.aspx
          Episode 13: PHP Internals, Service-orientated Architecture and Language Wars   

Some episodes of this show are brought to you after more beers than others. This is one of those episodes where it's more, so if you don't like swearing and listening to a slightly confused Bristolian ramble about points he occasionally forgets, then you might want to skip this one.

Regardless, Ben, Zack K. and Phil discuss the difference between PHP's organisational structure and lack of BDFL and that of Rails or Linux. We then discuss service-orientated architecture a little and move on to how you should not box yourself into a single programming language - on your CV or in general as a programmer.


          Technology Auditor - Verizon - Basking Ridge, NJ   
And/or (d) ERP security and control reviews (Oracle, SAP, PeopleSoft). IT security administration or system administration of UNIX, Linux or Windows platforms...
From Verizon - Thu, 29 Jun 2017 10:58:49 GMT - View all Basking Ridge, NJ jobs
          Firebird 2.5.2   
Firebird is a relational database offering many ANSI SQL standard features that runs on Linux, Windows, and a variety of Unix platforms. Firebird offers excellent concurrency, high performance, and powerful language support for stored procedures and triggers.
          Comment on Kdenlive, a free video editor for Windows and Linux, by Ioan Ionel   
"This app can't run on your PC. To find a version for your PC, check with the software publisher." That is what I get, even though I have installed and reinstalled it 100 times. Where on earth am I going wrong, or what is happening? It would be a shame to smash this laptop just because I'm an idiot! But... where am I going wrong?
          Navicat SQLite for Linux 10.0.6   
Navicat is a set of graphical database management, reporting and monitoring tools for SQLite database systems. Navicat is easy-to-use, powerful and supports HTTP and SSH for remote SQLite connection.
          Big Data Is All Around You: Everyday Big Data Analysis Cases and the Technology Behind Them   

Big Data Is All Around You: Everyday Big Data Analysis Cases and the Technology Behind Them

By Stephen Cui

I. Business applications of big data analysis

1. Sports event prediction

During the World Cup, companies such as Google, Baidu, Microsoft and Goldman Sachs all launched platforms for predicting match results. Baidu's results were the most impressive: it predicted all 64 matches of the tournament with 67% accuracy, rising to 94% in the knockout stage. With internet companies replacing Paul the Octopus in event prediction, future sports events look set to be shaped by big data forecasting.

"In Baidu's World Cup predictions we considered five factors - team strength, home advantage, recent form, overall World Cup record and bookmakers' odds - and these data basically all came from the internet. We then used a machine learning model designed by search experts to aggregate and analyse the data and produce the predictions." - Zhang Tong, head of Baidu's Beijing Big Data Lab
2. Stock market prediction

Last year, researchers at Warwick Business School and the physics department at Boston University found that the financial keywords users search for on Google may anticipate the direction of financial markets, with the corresponding investment strategy returning as much as 326%. Earlier, experts had tried to predict stock market swings from the sentiment of Twitter posts.

In theory, stock market prediction is better suited to the United States. The Chinese stock market cannot be traded in both directions - you can only profit when stocks rise - which attracts speculative capital to exploit information asymmetries and artificially distort market patterns. Without relatively stable patterns the Chinese market is hard to predict, and some variables that decisively affect outcomes simply cannot be monitored.

In the US, many hedge funds already invest using big data techniques, and to great profit. In China, the CSI GF Baidu Baifa 100 index fund ("Baifa 100") has risen 68% in the four-plus months since its launch.

Like traditional quantitative investing, big data investing relies on models, but the number of data variables in the model increases geometrically: on top of the existing structured financial data it adds unstructured data such as social commentary, geographic information and satellite monitoring, and quantifies that unstructured data so the model can absorb it.

Because big data models are extremely costly, industry insiders believe big data will become a shared, platform-based service: data and technology are the ingredients and the stove, and fund managers and analysts can cook up their own strategies on the platform.

http://v.youku.com/v_show/id_XMzU0ODIxNjg0.html

3. Market price prediction

CPI describes price movements that have already happened, and the statistics bureau's figures are not authoritative. Big data, however, may help people see where prices are heading and foresee inflation or an economic crisis in advance. The classic example is Jack Ma learning of the Asian financial crisis early through Alibaba's B2B big data - credit, of course, to Alibaba's data team.

4. User behaviour prediction

Based on data such as users' search behaviour, browsing behaviour, comment history and profiles, internet businesses can understand overall consumer demand and target production, product improvement and marketing accordingly. House of Cards choosing its cast and plot, Baidu running precision advertising based on user preferences, Alibaba booking production lines for custom products based on Tmall user profiles, and Amazon predicting user clicks to ship goods in advance have all benefited from predicting internet users' behaviour.

Pre-purchase behaviour can deeply reflect a potential customer's buying psychology and intent. For example, customer A views five TV sets in a row: four from domestic brand S and one from foreign brand T; four LED models and one LCD; priced at 4,599, 5,199, 5,499, 5,999 and 7,999 yuan. To some extent this reflects A's brand preferences: a lean towards domestic brands and mid-priced LED TVs. Customer B views six sets: two from foreign brand T, two from foreign brand V and two from domestic brand S; four LED and two LCD; priced at 5,999, 7,999, 8,300, 9,200, 9,999 and 11,050 yuan; similarly, this suggests B leans towards imported brands and high-priced LED TVs.

http://36kr.com/p/205901.html

5. Human health prediction

Traditional Chinese medicine can detect hidden chronic illnesses by looking, listening, questioning and pulse-taking, and can even tell from a person's constitution what symptoms may appear in the future. Human vital signs change according to certain patterns, and before a chronic disease develops the body already shows persistent anomalies. In theory, if big data captured such anomalies, chronic disease could be predicted.

6. Epidemic prediction

The likelihood of a large-scale outbreak can be predicted from people's searches and shopping behaviour; the classic "flu prediction" belongs to this category. If more and more searches for "flu" or "banlangen" (a popular flu remedy) come from a region, a flu trend there can naturally be inferred.

Google successfully predicted the winter flu:
In 2009, Google analysed the 50 million terms Americans searched most frequently, compared them with US CDC data on seasonal flu spread between 2003 and 2008, and built a specific mathematical model. In the end Google successfully predicted the spread of flu in winter 2009, down to specific regions and states.

7. Disaster prediction

Weather forecasting is the most typical form of disaster prediction. If natural disasters such as earthquakes, floods, heat waves and rainstorms could be predicted and announced further in advance with big data, it would help with disaster reduction, preparedness, relief and aid. Unlike in the past, when data collection suffered from blind spots and high costs, the Internet of Things era can use cheap sensors, cameras and wireless networks for real-time monitoring and collection, then apply big data predictive analytics for more accurate disaster forecasts.

8. Environmental change prediction

Beyond short-term, micro-level weather and disaster prediction, longer-term and more macro-level environmental and ecological change can also be predicted. Shrinking forests and farmland, endangered wild animals and plants, rising coastlines and the greenhouse effect are the Earth's "chronic problems". The more data humanity has about the Earth's ecosystems and weather patterns, the easier it is to model future environmental change and prevent harmful shifts. Big data helps humanity collect, store and mine more of the Earth's data, while also providing the tools for prediction.

9. Traffic behaviour prediction

Based on LBS positioning data from users and vehicles, the travel characteristics of individuals and groups can be analysed to predict traffic behaviour. Transport authorities can predict traffic flows on different roads at different times for intelligent vehicle dispatching or reversible lanes; users can choose roads with a lower probability of congestion based on the predictions.

Baidu's map-based LBS predictions cover an even wider range: predicting migration trends during the Spring Festival travel rush to guide the planning of train lines and flight routes; predicting visitor numbers at tourist attractions over holidays to guide people's choice of destination; and, day to day, Baidu heat maps that show crowd levels in city business districts, zoos and other places to guide users' travel choices and businesses' site selection.

Dolgov's team uses machine learning algorithms to build models of pedestrians on the road. Every mile driven by a self-driving car is recorded; the car's computer keeps these data and analyses how different objects behave in different environments. Some driver behaviours may be set as fixed variables (such as "green light, go"), but the car's computer does not apply such logic rigidly; instead, it learns from actual driver behaviour.

As a result, a car following a garbage truck may choose to change lanes and pass when the truck stops, rather than stopping behind it. Google has accumulated 700,000 miles of driving data, which helps its cars adjust their behaviour based on their own learning experience.

http://www.5lian.cn/html/2014/chelianwang_0522/42125_4.html

10. Energy consumption prediction

The California grid system operations centre manages more than 80% of California's grid, delivering 289 million megawatt-hours of electricity a year to 35 million users over more than 25,000 miles of power lines. The centre uses Space-Time Insight's software for intelligent management, jointly analysing massive data from sources including weather, sensors and metering devices to predict changes in energy demand across regions, dispatch power intelligently, balance supply and demand across the whole grid, and respond quickly to potential crises. China's smart grid industry is already trialling similar big data prediction applications.

II. Types of Big Data Analysis

By timeliness, data analysis divides into real-time analysis and offline analysis.

Real-time analysis is typical of finance, mobile, and consumer internet (B2C) products, which often must return analysis over hundreds of millions of rows within seconds so that user experience does not suffer. Meeting that requirement takes carefully designed parallel clusters of traditional relational databases, in-memory computing platforms, or HDD-based architectures, all of which carry fairly high hardware and software costs. Relatively new tools for real-time analysis of massive data include EMC's Greenplum and SAP's HANA.

For the many applications whose response-time requirements are less strict, such as offline statistical analysis, machine learning, computing a search engine's inverted index, or running a recommendation engine, offline analysis is the right approach: collection tools import log data into a dedicated analysis platform. Faced with massive data, however, traditional ETL tools often fail outright, mainly because the overhead of data format conversion is so high that they cannot keep up with collection at this scale. Internet companies' own large-scale collection tools, such as Facebook's open-source Scribe, LinkedIn's open-source Kafka, Taobao's open-source TimeTunnel, and Hadoop's Chukwa, can all ingest and transfer hundreds of megabytes of log data per second and load it into a central Hadoop system.

By data volume, big data analysis falls into three tiers: memory scale, BI scale, and massive scale.

Memory scale means the data does not exceed the cluster's total memory. Do not underestimate today's memory capacity: Facebook caches as much as 320TB of data in Memcached, and a single PC server can now carry more than 100GB of RAM. In-memory databases can therefore keep hot data resident in memory for very fast analysis, which suits real-time workloads well. Figure 1 shows a practical MongoDB architecture for this kind of real-time analysis.


Figure 1: A MongoDB architecture for real-time analysis

Large MongoDB clusters currently have some stability problems, including periodic write blocking and replica synchronization failures, but MongoDB remains a promising NoSQL option for high-speed data analysis.

In addition, most vendors now offer solutions with SSDs of 4GB and up; combining memory with SSDs can easily approach in-memory analysis performance. As SSDs develop, memory-class data analysis is bound to find wider use.

BI scale refers to data too large for memory but which can generally be loaded into traditional BI products and purpose-built BI databases for analysis. Mainstream BI products all offer solutions for terabyte-scale and larger data, in many varieties.

Massive scale refers to volumes at which databases and BI products fail completely or cost too much. Excellent enterprise products exist at this level too, but for hardware and software cost reasons most internet companies store data in Hadoop's HDFS distributed file system and analyze it with MapReduce. Later, this article introduces a MapReduce-based multidimensional data analysis platform on Hadoop.
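The MapReduce model just mentioned boils down to two user-supplied functions, map and reduce, with a shuffle/sort between them. The canonical word count can be simulated in a single process (a sketch of the programming model only, not of Hadoop itself):

```python
from itertools import groupby

def map_fn(line):
    # Map: emit (word, 1) for every word in one input line.
    for word in line.split():
        yield word, 1

def reduce_fn(word, counts):
    # Reduce: sum all counts that the shuffle grouped under one key.
    yield word, sum(counts)

def run_mapreduce(lines):
    """Simulate map -> shuffle (sort and group by key) -> reduce."""
    pairs = [kv for line in lines for kv in map_fn(line)]
    pairs.sort(key=lambda kv: kv[0])  # the "shuffle/sort" phase
    out = {}
    for key, group in groupby(pairs, key=lambda kv: kv[0]):
        for k, v in reduce_fn(key, (v for _, v in group)):
            out[k] = v
    return out
```

On a real cluster the map calls run in parallel over HDFS blocks and the framework performs the shuffle; the two functions themselves stay this small.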

III. The General Big Data Analysis Process

3.1 Collection

Big data collection means using multiple databases to receive data sent from clients (web pages, apps, sensors, and so on), against which users can run simple queries and processing. E-commerce companies, for example, use traditional relational databases such as MySQL and Oracle to store every transaction, while NoSQL databases such as Redis and MongoDB are also commonly used for collection.

The main characteristic, and challenge, of the collection stage is high concurrency: tens of thousands of users may be accessing and operating at once. Train ticket booking sites and Taobao, for instance, see concurrent access peak above a million, so the collection tier needs a large fleet of databases to hold up, and load balancing and sharding across those databases truly demand careful thought and design.
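At its simplest, the sharding decision above is a stable hash from a record key to one of N collection-tier databases, so the same user always lands on the same shard (a minimal sketch; real deployments add replication and resharding strategies, and the `user:42` key format is just an illustration):

```python
import hashlib

def shard_for(key, n_shards):
    """Map a record key to a shard index with a stable hash.

    MD5 is used only for its even, deterministic spread, not for
    security; the same key always yields the same shard index.
    """
    digest = hashlib.md5(key.encode('utf-8')).hexdigest()
    return int(digest, 16) % n_shards
```

A load balancer in front of the databases can route each write with `shard_for(user_id, n)` instead of keeping routing tables.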

3.2 Import / Preprocessing

Although the collection tier has many databases of its own, effective analysis of this mass of data still requires importing the front-end data into one centralized large distributed database or distributed storage cluster, with some simple cleaning and preprocessing done on top of the import. Some users also apply Storm, open-sourced by Twitter, to run stream computation on the data at import time, meeting the real-time computing needs of certain businesses.

The defining characteristic and challenge of import and preprocessing is sheer volume: the import rate often reaches hundreds of megabytes, even gigabytes, per second.

3.3 Statistics / Analysis

Statistics and analysis mainly use distributed databases or distributed compute clusters to run ordinary analysis, classification, and aggregation over the data stored in them, covering the most common analysis needs. Real-time requirements here may involve EMC's Greenplum, Oracle's Exadata, or Infobright's MySQL-based columnar storage, while batch workloads and workloads over semi-structured data can use Hadoop.

The main characteristic and challenge of statistics and analysis is the volume of data involved, which places a heavy load on system resources, I/O above all.

3.4 Mining

Unlike the statistics and analysis stages, data mining generally has no preset theme. It runs various algorithms over the existing data to make predictions, meeting higher-level analysis needs. Typical algorithms include k-means for clustering, SVM for statistical learning, and Naive Bayes for classification; the main tools include Hadoop's Mahout. The characteristic and challenge of this stage is that mining algorithms are complex, the data and compute volumes involved are both large, and common mining algorithms are mostly single-threaded.
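The k-means clustering named above is small enough to sketch in full: assign each point to its nearest center, then move each center to the mean of its cluster, and repeat. This is a toy in-process version, not Mahout's distributed implementation; the 2-D points and fixed iteration count are illustrative assumptions:

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Toy k-means over 2-D points; returns the final centers."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)          # pick k points as initial centers
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:                      # assignment step
            i = min(range(k),
                    key=lambda c: (p[0] - centers[c][0]) ** 2
                                + (p[1] - centers[c][1]) ** 2)
            clusters[i].append(p)
        for i, cl in enumerate(clusters):     # update step
            if cl:                            # keep old center if cluster empties
                centers[i] = (sum(p[0] for p in cl) / len(cl),
                              sum(p[1] for p in cl) / len(cl))
    return centers
```

Distributed versions such as Mahout's parallelize exactly these two steps: assignment as a map phase, the mean update as a reduce phase.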


IV. Big Data Analysis Tools

4.1 Hadoop

Hadoop is a software framework for distributed processing of large volumes of data, and it does that processing in a reliable, efficient, scalable way. Hadoop is reliable because it assumes compute and storage elements will fail, so it keeps multiple working copies of the data and redistributes processing around failed nodes. It is efficient because it works in parallel, and parallelism speeds up processing. It is also scalable, handling petabytes of data. And because Hadoop runs on commodity servers, its cost is low and anyone can use it.

Hadoop is a distributed computing platform that users can easily architect and use, and applications that process massive data are easy to develop and run on it. Its main advantages:

- High reliability: Hadoop's bit-by-bit handling of storage and processing has earned wide trust.
- High scalability: Hadoop distributes data and computation across clusters of available machines, which can readily grow to thousands of nodes.
- High efficiency: Hadoop moves data dynamically between nodes and keeps each node dynamically balanced, so processing is very fast.
- High fault tolerance: Hadoop automatically keeps multiple copies of the data and automatically reassigns failed tasks.

Hadoop's framework is written in Java, so it runs ideally on Linux production platforms, though applications on Hadoop can also be written in other languages such as C++.

4.2 HPCC

HPCC stands for High Performance Computing and Communications. In 1993, the US Federal Coordinating Council for Science, Engineering, and Technology submitted to Congress the report "Grand Challenges: High Performance Computing and Communications," the report that defined the HPCC program, a presidential science strategy initiative whose purpose was to solve a set of important scientific and technological challenges through intensified research and development. HPCC was the program through which the US pursued the information superhighway; implementation was to cost tens of billions of dollars, with these main goals: develop scalable computing systems and supporting software for terabit-level network transmission performance, develop gigabit networking technology, and expand the network connection capacity of research and education institutions.

The program has five components:

- High Performance Computing Systems (HPCS): research on coming generations of computer systems, system design tools, advanced prototype systems, and evaluation of existing systems;
- Advanced Software Technology and Algorithms (ASTA): software support for grand-challenge problems, new algorithm design, software branches and tools, computational science, and high performance computing research centers;
- National Research and Education Network (NREN): research and development of relay stations and gigabit-level transmission;
- Basic Research and Human Resources (BRHR): basic research, training, education, and course materials, designed to increase the stream of innovation in scalable high performance computing by rewarding investigator-initiated, long-term research, to enlarge the pool of skilled and trained personnel by improving education and high performance computing training and communication, and to provide the infrastructure needed to support these research activities;
- Information Infrastructure Technology and Applications (IITA): intended to secure US leadership in developing advanced information technology.

4.3 Storm

Storm is free, open-source software: a distributed, fault-tolerant real-time computation system. Storm can process enormous streams of data very reliably, doing for real-time processing what Hadoop does for batch data. Storm is simple, supports many programming languages, and is fun to use. It was open-sourced by Twitter; other well-known adopters include Groupon, Taobao, Alipay, Alibaba, Happy Elements, and Admaster.

Storm has many application areas: real-time analytics, online machine learning, continuous computation, distributed RPC (remote procedure call, a protocol for requesting services from a program on a remote computer over the network), ETL (Extraction-Transformation-Loading, that is, data extraction, transformation, and loading), and more. Storm's processing speed is striking: in tests, each node processed one million data tuples per second. Storm is scalable and fault-tolerant, and it is easy to set up and operate.

4.4 Apache Drill

To help enterprise users find more effective ways to speed up queries over Hadoop data, the Apache Software Foundation launched an open-source project named Drill. Apache Drill implements Google's Dremel.

According to Tomer Shiran, product manager at Hadoop vendor MapR Technologies, Drill has been run as an Apache Incubator project and will continue to be promoted to software engineers worldwide.

The project creates an open-source counterpart of Google's Dremel Hadoop tool (which Google uses to speed up the internet applications of its Hadoop data analysis tooling), and Drill will help Hadoop users query massive datasets faster.

Drill takes its inspiration from Google's Dremel project, which helps Google analyze and process massive datasets, including analyzing crawled web documents, tracking data for applications installed through Android Market, analyzing spam, and analyzing test results on Google's distributed build system.

By developing the Drill Apache open-source project, organizations can expect to establish Drill's APIs and a flexible, powerful architecture that supports a broad range of data sources, data formats, and query languages.

4.5 RapidMiner

RapidMiner is a world-leading data mining solution with a very large share of advanced technology. Its data mining tasks cover a wide range, and it simplifies the design and evaluation of data mining processes.

Functions and features:

- Data mining technology and libraries offered free of charge
- 100% Java code (runs across operating systems)
- Data mining processes that are simple, powerful, and intuitive
- Internal XML that guarantees a standardized format for representing and exchanging data mining processes
- Automation of large-scale processes with a simple scripting language
- Multi-level data views that keep data handling effective and transparent
- Interactive prototyping in a graphical user interface
- Command line (batch mode) for automated large-scale application
- Java API (application programming interface)
- Simple plugin and extension mechanisms
- A powerful visualization engine with many cutting-edge visual models for high-dimensional data
- More than 400 supported data mining operators

Yale University has applied it successfully in many different application areas, including text mining, multimedia mining, feature design, data stream mining, integrated development methods, and distributed data mining.

4.6 Pentaho BI

Pentaho BI 平台不同于传统的BI 产品,它是一个以流程为中心的,面向解决方案(Solution)的框架。其目的在于将一系列企业级BI产品、开源软件、API等等组件集成起来,方便商务智能应用的开发。它的出现,使得一系列的面向商务智能的独立产品如Jfree、Quartz等等,能够集成在一起,构成一项项复杂的、完整的商务智能解决方案。

Pentaho BI 平台,Pentaho Open BI 套件的核心架构和基础,是以流程为中心的,因为其中枢控制器是一个工作流引擎。工作流引擎使用流程定义来定义在BI 平台上执行的商业智能流程。流程可以很容易的被定制,也可以添加新的流程。BI 平台包含组件和报表,用以分析这些流程的性能。目前,Pentaho的主要组成元素包括报表生成、分析、数据挖掘和工作流管理等等。这些组件通过 J2EE、WebService、SOAP、HTTP、Java、javascript、Portals等技术集成到Pentaho平台中来。 Pentaho的发行,主要以Pentaho SDK的形式进行。

Pentaho SDK共包含五个部分:Pentaho平台、Pentaho示例数据库、可独立运行的Pentaho平台、Pentaho解决方案示例和一个预先配制好的 Pentaho网络服务器。其中Pentaho平台是Pentaho平台最主要的部分,囊括了Pentaho平台源代码的主体;Pentaho数据库为 Pentaho平台的正常运行提供的数据服务,包括配置信息、Solution相关的信息等等,对于Pentaho平台来说它不是必须的,通过配置是可以用其它数据库服务取代的;可独立运行的Pentaho平台是Pentaho平台的独立运行模式的示例,它演示了如何使Pentaho平台在没有应用服务器支持的情况下独立运行;

Pentaho解决方案示例是一个Eclipse工程,用来演示如何为Pentaho平台开发相关的商业智能解决方案。

Pentaho BI 平台构建于服务器,引擎和组件的基础之上。这些提供了系统的J2EE 服务器,安全,portal,工作流,规则引擎,图表,协作,内容管理,数据集成,分析和建模功能。这些组件的大部分是基于标准的,可使用其他产品替换之。

4.7 SAS Enterprise Miner

- A complete toolset supporting the entire data mining process
- An easy-to-use graphical interface that lets different types of users build models quickly
- Powerful model management and evaluation features
- A fast, convenient model publishing mechanism that helps close the business loop

V. Data Analysis Algorithms

Big data analysis relies mainly on machine learning and large-scale computing. Machine learning includes supervised learning, unsupervised learning, reinforcement learning, and more, and supervised learning in turn includes classification, regression, learning to rank, matching, and so on (see Figure 1). Classification is the most common machine learning problem: spam filtering, face detection, user profiling, text sentiment analysis, and web page categorization are all, at heart, classification problems. Classification learning is also the most thoroughly studied and most widely used branch of machine learning.

Recently, Fernández-Delgado et al. published an interesting paper in JMLR (Journal of Machine Learning Research, a top machine learning journal). They pitted 179 different classification methods (classification learning algorithms) against each other on 121 UCI datasets (UCI is a public machine learning repository, and none of its datasets is very large). Random Forest and SVM came in first and second, with little difference between them, and on 84.3% of the datasets Random Forest beat 90% of the other methods. In other words, in most cases Random Forest or SVM alone will get the job done.



https://github.com/linyiqun/DataMiningAlgorithm

KNN

The k-nearest-neighbors algorithm. Given labeled training data and a new test point, find the training points nearest to the test point and assign it the class that holds the majority among them. Different weights can also be given to those neighbors: nearer points carry larger weights, farther points naturally smaller ones. (Detailed introduction link.)
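A minimal sketch of that idea in plain Python, using an unweighted majority vote (the inverse-distance weighting mentioned above is noted in a comment; the toy feature tuples are invented for illustration):

```python
from collections import Counter

def knn_predict(train, query, k=3):
    """train: list of (feature_tuple, label). Classify `query` by
    majority vote among its k nearest training points under squared
    Euclidean distance. For weighted kNN, weight each neighbor's
    vote by 1/distance instead of counting it once.
    """
    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = sorted(train, key=lambda item: sqdist(item[0], query))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]
```

Note that kNN has no training phase at all; every prediction scans the stored data, which is why large-scale uses index the points first.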

Naive Bayes

The Naive Bayes algorithm is one of the simpler classifiers in the Bayesian family. It relies on the important Bayes' theorem, which in one sentence is the derivation of conditional probabilities from one another. (Detailed introduction link.)

Naive Bayes classification is a very simple classification algorithm, called naive because its underlying idea really is plain: for a given item to classify, work out the probability of each class given that the item appears, and assign the item to whichever class comes out largest. Put colloquially: if you see a Black man on the street and are asked to guess where he is from, you will most likely guess Africa, because people from Africa make up the largest share of that group. He might of course be American or Asian, but with no other information available we choose the class with the largest conditional probability. That is the foundation of the Naive Bayes approach.
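A hedged sketch of that idea for categorical features, with add-one (Laplace) smoothing so unseen values never zero out a class; the tiny "spam" dataset is invented purely for illustration:

```python
from collections import Counter, defaultdict

def train_nb(samples):
    """samples: list of (feature_tuple, label). Count labels and,
    for each (feature position, label), the observed feature values."""
    labels = Counter(lab for _, lab in samples)
    values = defaultdict(Counter)
    for feats, lab in samples:
        for i, v in enumerate(feats):
            values[(i, lab)][v] += 1
    return labels, values

def predict_nb(model, feats):
    """Pick the label maximizing P(label) * prod_i P(feature_i | label),
    the 'naive' product that assumes features are independent."""
    labels, values = model
    total = sum(labels.values())
    def score(lab):
        p = labels[lab] / total
        for i, v in enumerate(feats):
            seen = values[(i, lab)]
            p *= (seen[v] + 1) / (labels[lab] + len(seen) + 1)  # smoothed
        return p
    return max(labels, key=score)
```

In practice the products are computed as sums of logs to avoid underflow; the structure is otherwise the same.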

SVM

The support vector machine algorithm, a method for classifying both linear and nonlinear data. When classifying nonlinear data, a kernel function can convert it into a linearly separable case that is then handled as such. One key step is the search for the maximum-margin hyperplane. (Detailed introduction link.)
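The kernel idea can be illustrated without a full SVM: an explicit quadratic feature map turns 1-D data that is not linearly separable in x into data that is separable in (x, x²), where even a plain perceptron finds a separating line. This is a sketch of the feature-map idea only, with invented data; a real SVM additionally maximizes the margin:

```python
def phi(x):
    """Explicit quadratic feature map: 1-D input -> (x, x*x)."""
    return (x, x * x)

def train_linear(data, epochs=200):
    """Perceptron on the mapped features. data: list of (x, y) with
    y in {-1, +1}. Returns a predict function. Not an SVM: no margin
    maximization, only a separating hyperplane in feature space."""
    w0 = w1 = b = 0.0
    for _ in range(epochs):
        for x, y in data:
            f0, f1 = phi(x)
            if y * (w0 * f0 + w1 * f1 + b) <= 0:  # misclassified: update
                w0 += y * f0
                w1 += y * f1
                b += y
    return lambda x: 1 if w0 * phi(x)[0] + w1 * phi(x)[1] + b > 0 else -1
```

Kernelized SVMs get the same effect implicitly: the kernel computes inner products in the mapped space without ever materializing phi.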

Apriori

Apriori is an association rule mining algorithm. It mines frequent itemsets through join and prune operations, then derives association rules from those frequent itemsets; the derived rules must satisfy a minimum confidence requirement. (Detailed introduction link.)
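The join and prune steps above fit in a short level-wise search: join frequent k-itemsets into (k+1)-candidates and prune any candidate that has an infrequent k-subset. A compact sketch, with minimum support given as an absolute transaction count:

```python
from itertools import combinations

def apriori(transactions, min_count):
    """transactions: list of sets. Returns {itemset: support count}
    for every itemset appearing in at least `min_count` transactions."""
    frequent = {}
    level = {frozenset([i]) for t in transactions for i in t}  # 1-itemsets
    k = 1
    while level:
        counts = {c: sum(1 for t in transactions if c <= t) for c in level}
        survivors = {c: n for c, n in counts.items() if n >= min_count}
        frequent.update(survivors)
        # Join: union pairs of frequent k-itemsets into (k+1)-candidates;
        # prune any candidate with an infrequent k-subset.
        level = set()
        for a, b in combinations(survivors, 2):
            cand = a | b
            if len(cand) == k + 1 and all(
                    frozenset(s) in survivors for s in combinations(cand, k)):
                level.add(cand)
        k += 1
    return frequent
```

Rules are then read off the result: for a frequent set X∪Y, the rule X→Y is kept when support(X∪Y)/support(X) meets the minimum confidence.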

PageRank

A web page importance/ranking algorithm. PageRank originated at Google. Its core idea is to use a page's incoming links as the criterion of that page's quality; if a page contains multiple links pointing outward, its PR value is divided evenly among them. PageRank is also vulnerable to link spam attacks. (Detailed introduction link.)
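The "split PR evenly across out-links" rule translates directly into the classic power iteration. A minimal sketch, where d is the usual 0.85 damping factor and dangling pages spread their rank evenly over all pages:

```python
def pagerank(links, d=0.85, iters=50):
    """links: {page: [pages it links to]} (all targets must be keys).
    Returns {page: rank}; ranks sum to 1."""
    pages = list(links)
    n = len(pages)
    pr = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        nxt = {p: (1 - d) / n for p in pages}   # random-jump share
        for p, outs in links.items():
            if outs:
                share = pr[p] / len(outs)       # PR split evenly over out-links
                for q in outs:
                    nxt[q] += d * share
            else:                               # dangling page
                for q in pages:
                    nxt[q] += d * pr[p] / n
        pr = nxt
    return pr
```

A page collecting in-links (here 'b') ends up with the highest rank, which is exactly the in-link criterion described above.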

RandomForest

The random forest algorithm. The idea is decision trees plus bagging (bootstrap aggregation): the trees are CART classification-and-regression trees, and combining each tree's weak classifier yields a final strong classifier. When each subtree is built, a random sample of the data and a random subset of the attributes are used, which guards against overfitting. (Detailed introduction link.)
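The bagging half of that recipe can be sketched with one-feature decision stumps standing in for full CART trees (a deliberate simplification: real random forests grow deeper trees and also sample attributes at each split; binary labels and 1-D data are assumed here):

```python
import random
from collections import Counter

def train_stump(sample):
    """Fit a 1-D decision stump (threshold + side labels) minimizing
    training errors on `sample`, a list of (x, label); binary labels."""
    labels = sorted({y for _, y in sample})
    if len(labels) == 1:                       # degenerate bootstrap
        return lambda x: labels[0]
    best = None
    for t in sorted({x for x, _ in sample}):
        for lo, hi in ((labels[0], labels[1]), (labels[1], labels[0])):
            err = sum((lo if x <= t else hi) != y for x, y in sample)
            if best is None or err < best[0]:
                best = (err, t, lo, hi)
    _, t, lo, hi = best
    return lambda x: lo if x <= t else hi

def bagged_predictor(data, n_learners=15, seed=0):
    """Bagging: train each stump on a bootstrap resample of `data`,
    then predict by majority vote over all the stumps."""
    rng = random.Random(seed)
    stumps = [train_stump([rng.choice(data) for _ in data])
              for _ in range(n_learners)]
    return lambda x: Counter(s(x) for s in stumps).most_common(1)[0][0]
```

Averaging many high-variance learners trained on resampled data is what stabilizes the ensemble; random forests add per-split attribute sampling on top of this.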

Artificial Neural Network

The term "neural network" actually comes from biology; what we refer to is properly called an artificial neural network (ANN).

Artificial neural networks also have a basic capacity for self-adaptation and self-organization. During learning or training they change their synaptic weight values to meet the demands of the surrounding environment, and the same network can serve different functions depending on how and on what it is trained. An artificial neural network is a system capable of learning: it can develop knowledge to the point of exceeding its designer's original level. Its training generally takes one of two forms. One is supervised learning, or learning with a teacher, which uses given labeled samples for classification or imitation. The other is unsupervised learning, or learning without a teacher, in which only the learning style or certain rules are specified; what is actually learned depends on the environment the system sits in (that is, its input signals), and the system can discover environmental features and regularities on its own, a capability closer to that of the human brain.

VI. Cases

6.1 Beer and Diapers



The "beer and diapers" story arose in American Walmart supermarkets in the 1990s. Analyzing sales data, Walmart managers noticed something hard to explain: under certain circumstances, "beer" and "diapers," two products with no apparent connection, often showed up in the same shopping basket. The unusual pattern caught managers' attention, and follow-up investigation traced it to young fathers.

In American households with an infant, the mother usually stays home to look after the baby while the young father goes to the supermarket to buy diapers. While buying diapers, fathers would often pick up beer for themselves, so the two seemingly unrelated items kept landing in the same basket. If a young father could find only one of the two items in a store, he might well abandon the purchase and go to another shop where he could buy beer and diapers in one trip. Having spotted this pattern, Walmart began placing beer and diapers in the same area of its stores so that young fathers could find both quickly and finish shopping fast, and Walmart could sell these customers two items at a time instead of one, earning good sales revenue. That is the origin of the "beer and diapers" story.

Of course, "beer and diapers" needed technical support. In 1993, the American scholar Agrawal proposed association algorithms that analyze the sets of products in shopping baskets to find relationships between products and, from those relationships, customers' purchase behavior. From a mathematical and algorithmic angle, Agrawal proposed the Apriori algorithm for computing product associations. Walmart began applying Apriori to its POS data in the 1990s and succeeded, and so the "beer and diapers" story was born.
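The metrics behind such a rule are simple ratios: support is the share of baskets containing both sides, and confidence is the share of left-hand-side baskets that also contain the right-hand side. A sketch on invented basket data:

```python
def rule_metrics(transactions, lhs, rhs):
    """For the rule lhs -> rhs over a list of transaction sets:
    support = P(lhs and rhs together), confidence = P(rhs | lhs)."""
    lhs, rhs = set(lhs), set(rhs)
    n = len(transactions)
    both = sum(1 for t in transactions if (lhs | rhs) <= t)
    lhs_count = sum(1 for t in transactions if lhs <= t)
    return both / n, (both / lhs_count if lhs_count else 0.0)
```

Apriori enumerates the frequent itemsets; these two ratios then decide which candidate rules, such as diapers→beer, are worth acting on.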

6.2 Data analysis helps the Cincinnati Zoo improve customer satisfaction



Founded in 1873, the Cincinnati Zoo & Botanical Garden is one of the world's best-known zoos, with a strong reputation for species protection and preservation and for breeding programs with high survival rates. It covers 71 acres and houses 500 animal species and more than 3,000 plant varieties. One of the most-visited zoos in the US, it has earned a Zagat top-ten zoo rating, was named a favorite zoo for children by Parents magazine, and receives more than 1.3 million visitors a year.

The zoo is a non-profit and receives the lowest public subsidy of any zoo in Ohio, indeed in the US. Excluding government subsidies, more than two thirds of its $26 million annual budget is self-funded, so it must constantly look for new revenue. The best way to do that is to serve staff and visitors better and raise attendance, a win for the zoo, its customers, and taxpayers alike.

With the solution's strong collection and processing capability, connectivity, analytics, and the insight they bring, the zoo realized benefits in the following areas after deployment:

- Understanding each customer's browsing, usage, and spending patterns, and acting on their distribution in time and place to improve the visitor experience while maximizing revenue.
- Segmenting visitors by spending and touring behavior and running marketing and promotional campaigns for each segment, markedly improving loyalty and retention.
- Identifying low-spending visitors and targeting them with strategic direct mail, while rewarding loyal customers through creative marketing and incentive programs.
- A 360-degree view of customer behavior for better marketing decisions, saving more than $40,000 in marketing costs in the first year after implementation, with measurable results.
- Using geographic analysis to expose the many promotions and discount programs that missed expectations, and redeploying resources to higher-yield activities, saving the zoo more than $100,000 a year.
- Raising overall attendance through stronger marketing, with at least 50,000 additional visits in 2011.
- Insights that sharpen operations: for example, seeing an ice cream sales surge just before closing, the zoo decided to keep ice cream stands open until closing time, adding $2,000 a day in summer revenue.
- Food and beverage sales up 30.7% and retail sales up 5.9% over the prior year.
- Senior management making better decisions without needing IT to step in or provide support.
- Bringing analytics into the boardroom, with intuitive tools that help business staff grasp the data.

6.3 Sentiment analysis of the incident of police beating a middle school student in Zhaotong, Yunnan

Origin:

On May 20, a netizen posted on Weibo that Kong Dezheng, a second-year student at Ludian No. 2 Middle School in Zhaotong, Yunnan, had said "you on the phone, come down" to three police officers who had responded to a call at the school and were about to drive away. The two officers in the vehicle heard the remark, got out, chased the student down, and beat him with punches and kicks.

On May 26, the news office of the Ludian County public security bureau responded: the bureau had suspended the officer involved from duty, dismissed the two auxiliary officers who beat the student, and would take further action in accordance with law and regulation as the investigation proceeded. The bureau also said it would strengthen the education and management of its force and resolutely prevent such incidents from happening again.

Development:



On May 26, public attention to the incident rose sharply. Media coverage focused on angles such as "the homeroom teacher says the student liked to stir up trouble and had poor grades," "the beaten student's classmates went to the police station to demand an explanation," and "the school ordered students to delete photos," and the exposure of the school's delete-the-photos demand threatened to widen the controversy.

On the evening of May 26, Xinhuanet published "Police respond to 'Yunnan student beaten by two officers': officer suspended, auxiliaries dismissed." With a central mainstream online outlet publishing the official handling of the case, portals such as NetEase, Sina, and Tencent reposted it, giving the official response wide distribution.



Trend in public attention to the Zhaotong police beating incident (sample size: 290 posts)

Summary:

"Police beating a student, with pictures as proof: five days after the fact, the Ludian County police of Zhaotong ended up in the eye of the public opinion storm. After the incident, local authorities responded actively and disciplined those involved on May 26; this decisive assignment of responsibility fairly effectively calmed public sentiment and defused the crisis.

Looking at how the story spread: the incident occurred on May 20, but heated discussion appeared only on the 25th. Four quiet days led the Ludian police to assume the matter was over, perhaps even forgotten by those involved. Had the active local netizen 'Zhibo Yunnan' not posted about it on May 25, drawing the attention of the local newspaper Life New Daily, the story might indeed have ended there; but the development of public opinion admits no hypotheticals. The lesson, at minimum, is that negative information on Weibo and other social media platforms must be monitored in real time: ordinary grassroots accounts must be watched, and locally verified, active netizens even more so. In a sense, verified local netizens are the more powerful 'public opinion engines': once negative news is posted or reposted by them, the reach and the resulting pressure are greater.

The school also played a critical role in this incident. Both the homeroom teacher's and the school's responses were ill-judged. The school-level instruction to 'delete the photos' easily provoked resentment among netizens and students, and under that resentment only strengthened students' urge to spread the story. The teacher's negative characterization of the student, 'bad grades, likes to stir up trouble,' was read as 'the student deserved the beating,' and against the backdrop of teachers' already battered public image, those remarks showed a lapse of responsibility. The inappropriate conduct of the school and the teacher made both handling the incident and guiding opinion markedly harder. It should not have happened." — Zhu Minggang, sentiment analyst, Director of the People's Daily Online Public Opinion Monitoring Office

VII. Big Data Word Cloud Showcase

[Three word cloud images from the original article appeared here.]

End.

Please credit 36大数据 (36dsj.com) when reprinting: 36大数据, "Big Data Is All Around You | Everyday Big Data Analysis Cases and the Technology Behind Them."


          Projetos para Raspberry Pi - Sobrecarregue o seu PI   

Sobrecarregue seu Raspberry Pi

 

Prepare seu ferro de solda

 

 

http://cdn.mos.cms.futurecdn.net/f1890251779e708f600bf2a5a3a827fe-320-80.jpg

Nota: Nosso supercarregar seu artigo Raspberry Pi foi totalmente atualizado. Este recurso foi publicado pela primeira vez em fevereiro de 2013.

 

We've loved the Raspberry Pi in all its forms since it launched back in 2012. And it's increasingly obvious that the rest of the world loves these devices too. When the Pi first appeared, we didn't think anyone beyond enthusiasts and educators ...

The Raspberry Pi has struck a chord with hobbyists around the world in a way that no other device in recent years has. The initial production runs of all the various models - from the original Raspberry Pi to the newer Pi Zero - sell out so quickly that most of us have to wait a few months before they become generally available, although even then they sell as fast as the factories can make them.

That's not surprising, given that it's a fully functional computer capable of running Linux and - in the case of the Pi 2 - even Windows 10. The Pi Zero can be bought for five pounds, while for £30 (about US$45, AU$60) you can get a quad-core Raspberry Pi 2 with 1GB of RAM and all the connectivity you need on a credit-card-sized board. No wonder the project's goal of revolutionising outdated computing education in the UK seems to be working.

One thing is certain, though - the world of the amateur hardware hacker has never been the same since the Pi first appeared. These diminutive but fully functional systems are perfect for adding processing power in unusual places where space and electricity are at a premium.

 

Find out what else you can do with the tiny PC by exploring our collection of Raspberry Pi projects

The recent Astro Pi project is just the latest example of a Pi being sent into space, while FishPi sees them prepared to cross the ocean, but they're also finding uses in more mundane settings, helping to brew beer at home or drive remote-control cars. We're going to look at some cool projects for the Pi and introduce you to the techniques you need to turn yours into the device of your dreams.

Thanks to the versatility and depth of the Linux tools, it's easy to tune your Pi to be anything from a desktop computer to a media centre or hardware controller.

 

 

Distro guide

As you'd probably expect, there's a wide range of operating systems - known as distros - available for the Pi, and new ones seem to appear every week. Here we'll take a look at some of the most popular, as well as a few of the newer ones.

You install a distro slightly differently than on a normal computer. Since everything runs from an SD card, all you have to do is write the new operating system to that card. The simplest way is to use NOOBS, or you can also write images of other compatible distributions to the card.

If you're running Windows, Win32DiskImager is your best friend, while OS X and Linux users can use the command-line tool dd. This tool makes a bit-for-bit copy of data between a device and a file (or, for that matter, two files or two devices).

Distros are supplied as image files (a bit like ISO files for CDs) that can be written to the card, after being uncompressed if necessary, with:

$ sudo dd if=<image file> of=<device> bs=4k 
$ sudo sync

 

The second line makes sure that all the data has been written to the card and isn't stuck in any buffers. So, for example, on our test computer, which has two hard drives (sda and sdb), the SD card shows up as /dev/sdc, so replace <device> with /dev/sdc. If you're not sure which device your SD card is, run df -h in the terminal and it will list all the devices. You should be able to see which one it is.

Similarly, <image file> refers to the full path and filename of your image file - for example /home/nick/downloads/2015-11-21-raspbian-jessie.img.

To back up your Raspberry Pi setup, you can create a new image file by swapping the if (input file) and of (output file) arguments in the dd command. That is:

 

 

$ sudo dd if=<device> of=<image file> bs=4k

This image can then be compressed, using gzip or bzip2, so that it doesn't take up too much hard drive space.
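As a concrete sketch of the backup-and-compress step, the commands below run the same dd and gzip pipeline against a small dummy file standing in for the SD card (the real device path, e.g. /dev/sdc, and the image filename are up to you):

```shell
# Create a small dummy "card" so this sketch is safe to run anywhere;
# on a real Pi you would use the SD card device (e.g. /dev/sdc) instead.
dd if=/dev/zero of=dummy-card.bin bs=4k count=16 2>/dev/null

# Same shape as the real backup command: device in, image file out.
dd if=dummy-card.bin of=backup.img bs=4k 2>/dev/null
sync                     # make sure nothing is left sitting in buffers

# Compress the image; -k keeps the uncompressed backup.img alongside it.
gzip -kf backup.img
ls -l backup.img backup.img.gz
```

On a real card the dd step can take several minutes; the sync before removing anything is the important part.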

Most people use Raspbian

 

Raspbian

This is the distribution recommended by the Raspberry Pi Foundation. Unless you have a good reason to use a different one, it's probably your best bet. The latest version is based on Debian 8 (codenamed 'Jessie'), so you can easily install anything from the huge Debian repositories.

The default desktop environment is LXDE, which is very lightweight but a little basic for some tastes. Xfce is available for people who like a few more graphical frills. Raspbian also includes the raspi-config program, which is probably the easiest way to configure your Pi.

The Raspberry Pi was designed to get children into programming, and Raspbian was designed with that in mind. You'll find Idle (a Python IDE) and Scratch (a programming environment for young children) on the desktop - see the beginner's guide to programming for more details. You can download the distro here.

 

Arch Linux

While Raspbian was created to try to shield users from the operating system's internal configuration, Arch Linux is designed to help users understand how the system works. Special versions for the Pi's ARM processor can be downloaded from archlinuxarm.org - choose ARMv6 for the original Pi and Pi Zero, and ARMv7 > Broadcom for the Raspberry Pi 2.

The initial image includes only the basic system to get your Pi running and connected to the network. It doesn't include much of the software you might need to use the system, such as a graphical environment. You should find all the information you need on the Arch Linux Wiki.

Taking it from this initial state to a working system will take a bit of work, but along the way you'll learn how the internals of a Linux distro fit together. Whether or not that's worth all the effort is, of course, up to you.

 

OSMC

The Raspberry Pi may have been designed as an educational tool, but hobbyists were quick to turn it into a toy. This distro is designed to turn your Pi into a media centre that can be used to drive your TV.

It's based on Kodi, which lets you play music and videos you have as files, or stream them over the internet. The image can be downloaded from: https://osmc.tv/download. As for the details of how to install and configure it, we'll cover that a little later in this article.

If you have a MythTV back-end set up, you can use Kodi to provide a front-end interface. Depending on what you want to play, you may need to buy the codec packs that provide access to patent-protected audio and video algorithms.

 

Android

An official version of Android - approved by the Raspberry Pi Foundation - died a quiet death after first being announced in 2012. In its place, the community has been working on an unofficial version. Performance is hampered by the lack of hardware acceleration support (the developers describe it as "barely usable"), but it is available now.

 

 

Installing Raspbian

For most people who use it, Raspbian will be the graphical face of the Raspberry Pi. It can be obtained and installed onto an SD card by following the instructions on the previous page.

Once it's up and running, it's a good idea to grab the latest versions of all the software by connecting your Pi to the internet and doing one of two things - either open Menu > Preferences > Add/Remove Software and choose Options > Update Package Lists followed by Options > Check for Updates..., or open a terminal and type the following two commands:

 

$ sudo apt-get update

$ sudo apt-get upgrade

 

The killer feature of Raspbian is the raspi-config program, which can be run at any time by typing sudo raspi-config in a terminal. Once again there's a friendlier front-end available through the default desktop - click Menu > Preferences > Raspberry Pi Configuration to access the GUI.

 

It has several options, but the most important are:

Expand Filesystem - because of the way Raspbian is installed, it only creates a 2GB filesystem, so if you have a larger card, any remaining space will go unused. You can use this option to expand the filesystem to take advantage of the wasted space. Click Expand Filesystem on the System tab to do this in the GUI.

Boot Options - this changes whether your Pi boots into a graphical environment or a text one. Choose 'To Desktop' or 'To CLI' respectively from the System tab of the GUI to get the desired effect.

Overclock - get extra performance at no extra cost! See the section below for more details - this option is on the Performance tab of the GUI.

Advanced Options > Overscan - this option can be used on some displays to expand the graphics to fill the whole screen. You can safely ignore it unless you have problems.

Advanced Options > Memory Split - the Raspberry Pi uses the same chunk of memory for both the main processor and the graphics chip. Using this option (GPU Memory under the Performance tab in the GUI), you can specify the amount to allocate to graphics. Typically, you would lower this figure if you were running the Pi headless, letting you free up more memory for the main processor.

 

The installed software has been cut down to a minimum. This is a good idea, but you may find that tools you use on other desktop distros aren't there. Fortunately, since Raspbian is tied to the Debian armhf repositories, you have access to more programs than you'll probably ever need. Just open the Menu and choose Preferences > Add/Remove Software.

 

 

Overclocking

The processor at the heart of the Raspberry Pi is designed to run at anything from 700MHz (older models) to 900MHz (the newer Pi 2 and Pi Zero). In other words, to perform between 700,000,000 and 900,000,000 operations per second on the single-core models and - in theory - up to 3,600,000,000 operations per second on the quad-core Pi 2 (in practice it's unlikely to achieve this).

 

 

You can overclock your Raspberry Pi right from the LXDE desktop

 

Of course, 'designed to run' doesn't mean 'has to run'. You can increase that speed. However, doing so will increase power consumption, which in turn increases the amount of heat generated. If it gets too hot, you're likely to end up with a pile of smoking silicon rather than a working processor.

Fortunately, later versions of Raspbian now include a tool to help you push the speed up while keeping an eye on the temperature. Since this is an official tool, using it won't void your warranty (unlike earlier unofficial methods). Note that this doesn't apply to the Pi Zero - it's already at its theoretical maximum, and attempting to overclock it will actually slow it down.

Overclocking your Pi is simply a matter of opening the Raspberry Pi Configuration tool from Menu > Preferences, switching to the Performance tab and selecting a faster clock speed from the Overclock drop-down menu.

If you find your Pi becomes unstable, reboot with the [Shift] key held down to disable overclocking, then change the option back again. The maximum setting should give owners of the original model Pi a whopping 50% extra speed, which we found makes a real difference to the desktop user experience, especially for web browsing.

If you want to keep an eye on your core temperature, you can add the temperature widget to the LXDE panel. However, your Pi will automatically switch off overclocking once it reaches 85 degrees C.
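As a rough illustration of that threshold check: vcgencmd is only present on the Pi itself, so the sketch below parses a sample of its output format with a hypothetical reading; on a real Pi you would start from temp_line=$(vcgencmd measure_temp).

```shell
# Sample reading in vcgencmd's "temp=NN.N'C" format (hypothetical value):
temp_line="temp=84.3'C"

# Strip the "temp=" prefix and the "'C" suffix to get a bare number.
temp=${temp_line#temp=}
temp=${temp%\'C}

# Compare the integer part against the 85 degree C cut-off described above.
if [ "${temp%.*}" -ge 85 ]; then
  echo "overclock will be throttled"
else
  echo "ok at ${temp} C"
fi
```

Dropping something like this into a cron job gives you a crude headless temperature alarm.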

 

OSMC

You can install a media player, such as VLC, on Raspbian and use that to play videos. This works well if you're using your Pi as a general computer and giving it occasional multimedia duties. However, the hardware's small size, and the fact that it runs silently, make it a good choice for building your own entertainment centre.

The newer revision 2 boards have mounting holes to help you keep your entertainment centre tidy

You can start with Raspbian and customise it to your needs, and this is a good idea if you have some unusual function in mind. However, if you're looking to press a Pi into service as a dedicated media centre, life couldn't be simpler than using OSMC.

OSMC is built on the open source Kodi media centre and is incredibly easy to install. Head to the download page to grab the installer for Windows, Mac or Linux, which will do the hard work of copying OSMC to your microSD card. Once done, pop it into your Pi and boot up.

You'll be taken straight to the Kodi desktop interface, where you can start configuring it and adding your own media libraries using media stored locally, on a USB drive or over the network (see below). You can also install add-ons to access media from elsewhere - including catch-up TV streams and more besides.

 

Once your Pi-equipped media centre is under your TV, controlling it with a mouse and keyboard won't be practical - you could use a wireless model, of course, or simply install a remote-control app for your Android or Apple smartphone.

Be aware that OSMC can draw as much power as your Pi can muster - before you've connected any USB peripherals. It makes sense, therefore, to attach a powered USB hub to your Pi and then connect any external drives to that.

OSMC puts your entire digital media collection on tap

 

Moving on

 

It's possible to take complete control of your TV viewing using Linux, including watching live TV, and recording shows for later. This can be done using MythTV.

 

You'll need a separate computer with the appropriate cable connections to act as the server. A word of warning, though: MythTV is renowned for its pernickety installation. The stresses of this procedure are responsible for more than a few grey hairs.

You can play video files that you have stored on other computers on your network, for example those on a Network Attached Storage (NAS) box. The exact method for doing this will vary depending on how you share the files, but they are configured through the Add Sources buttons. For more information on this, check out the Kodi wiki.

 

 

Back up your photos using your Pi

The Raspberry Pi's size means we can use it to take control of other embedded devices. This might seem a bit redundant - embedded devices obviously already have some form of controller - but it means we can script and extend them in ways that aren't possible (or are at least very difficult) without the extra device.

Almost anything you can plug into a normal desktop can be driven by a Pi, but we're going to look at cameras for a couple of reasons. Firstly, there's support for most of them in Linux, and secondly, there are a number of useful projects you can do once you've grasped the basics.

 

 

 

In this form it's not very portable, but with a little judicious DIY you'll be able to package your Pi more conveniently

The best command-line tool for manipulating cameras on Linux is Gphoto2. Get it with:

 

$ sudo apt-get install gphoto2

 

Before getting stuck into the project, let's take a look at this useful tool and see what it can do.

The desktop environment may try to mount the camera, and this can cause Gphoto2 a few problems, so the easiest thing is to run without it. Open a terminal, run sudo raspi-config and, under Boot Options, select B1 Console, then reboot.

On our test system, we found that, running this way, we could power everything from the Pi's power supply, but if we tried to use a mouse as well, we needed to upgrade to a powered hub. Obviously, this will depend on the details of your peripherals and power supply.

In the new text-only environment, plug in your camera and run:

 

$ gphoto2 --auto-detect

 

This will try to find any camera connected to the Pi. Hopefully, it will pick up yours. Although it supports an impressive range, there are some cameras that won't work. If yours is one of the unlucky few, you'll need to beg, steal or borrow one from a friend before continuing.

Not all supported cameras are equal, and the next step is to see what your camera can do. To list the available actions, run:

 

$ gphoto2 --auto-detect --abilities

 

Broadly speaking, there are two main classes of abilities: capture and upload/download. The former lets you take photos from your scripts and is mostly present on higher-end cameras. The latter lets you deal with photos stored on the memory card and is present on most supported cameras. In this project, we deal only with the second set of abilities.

The simplest command we can send to the camera is to get all the photos stored on it. That is:

 

$ gphoto2 --auto-detect --get-all-files

 

 

Running this will download all the files from the camera into the current directory. That would be fine on a normal computer, but you may not want to do it on a Pi, as you risk filling up your memory card very quickly. Instead, we'll copy them onto a USB stick.

To do this in an interactive session, you could simply use a GUI tool to mount the stick, then run df -h to see where the USB stick is mounted and cd into the directory.

However, since this will be run automatically, we need to know in advance where the device will be.

There are a few ways of doing this, but let's keep things simple. We'll mount the first partition of the first serial disk and store the photos there. Here, we assume you're using the default pi user. If you're not, you'll need to adjust the script.

First, we need to create a mount point for the drive. This is just a folder, and it can go anywhere - we'll break with convention and put it in our home folder. So, before running the script, run:

 

$ mkdir /home/pi/pic_mount

 

With that done, we're ready to go. The script to mount the drive and grab the photos is:

 

#!/bin/bash 

if mount /dev/sda1 /home/pi/pic_mount ; then 
    echo "Partition mounted" 
    cd /home/pi/pic_mount 
    yes 'n' | gphoto2 --auto-detect --get-all-files 
    cd /home/pi 
    umount /dev/sda1 
else 
    echo "/dev/sda1 could not be mounted" 
fi

 

yes 'n' is a command that simply emits a stream of 'n' characters. This means that when Gphoto2 asks whether to overwrite previously downloaded files, it will decline. The umount is essential, because it ensures that the drive is properly synced and can be safely removed.
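You can see exactly what yes 'n' feeds to Gphoto2 by capping the stream with head:

```shell
# An endless stream of "n" answers, truncated to the first three lines:
yes 'n' | head -3
```

The same trick (yes with no argument produces "y") is handy for any non-interactive script that has to answer prompts.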

We've called the script get-pics.sh and saved it in the Pi's home directory. To make it executable, run:

 

$ chmod +x /home/pi/get-pics.sh

 

You should now be able to run it manually. You'll need to use sudo because it has to mount the drive.

The final piece of the puzzle is getting the script to run automatically. To do this, add it to the /etc/rc.local file. This script runs when you boot, and it runs as root, so there's no need to worry about permissions.

The quickest way to open the file as root is as follows:

$ sudo nano /etc/rc.local

 

Once done, add this line immediately before the line exit 0:

 

/home/pi/get-pics.sh 

 

Now all you have to do is plug in your camera (making sure it's switched on) and USB stick, and it will back up your photos when you boot.

Moving on

If you want to run the device headless, as will probably be the case, you could attach LEDs to the GPIO pins, as shown later in this article, and use these to indicate status. As well as saving images to the USB stick, you could upload them to an online service, such as Flickr. See the section on wireless networking on the next page for information on connecting your Pi to your phone.

You could include some kind of option to tell your Pi which photos to upload and which to store on the USB stick - for example, uploading low-resolution images and storing high-resolution ones. Or you could create low-resolution versions of the images and upload those.

Gphoto2 has many more features than we've used here, including bindings for Java and Python. For full details, check out the project's website here.

 

Of course, you don't have to stop there. If you have a wireless dongle on your Pi, you could use it to run an HTTP server. With some PHP scripting (or another web language), you could create an interface to Gphoto2 that would let you connect from your phone.

Taking it in a different direction, if your camera supports capture options, you could use your Pi to take photos as well as copy them.

 

 

Powering your Pi

The Raspberry Pi draws its power from its microUSB port. This supplies 5V, and the Raspberry Pi Foundation recommends an available current of at least 700mA. This can easily be delivered via a mains adaptor or a USB cable from a computer.

If you want your Pi to be portable, there are other options. Four AA batteries should provide enough power, provided you have the appropriate holder and cables to get the power into the microUSB port.

However, we found the best solution was to get a backup power supply for a mobile phone that connects directly to the Pi.

 

Ohm's law

There are two main ways of measuring electricity - voltage and current. Voltage (measured in volts) is the amount of energy a given quantity of electrons has, while current (measured in amps) is the number of electrons passing a point. The two are intimately connected by Ohm's law, which states: Voltage = Current x Resistance, or V = IR.

You can use this relationship to make sure you don't accidentally toast your Pi by pushing too much current into it. The exact configuration of the Pi is a little complex. If you want to dig deeper, Gert van Loo (one of the designers) has put together an explanation, which can be found here.

As a general rule, you can draw power from a GPIO pin at 3.3V and shouldn't draw more than 16mA, or push more than that into an input pin. Remember, this is the maximum current, so you should aim to use less.

So, from Ohm's law, we know V = IR, so R = V/I. If we plug in the Pi's figures and want to make sure we don't damage it, we know that R must be greater than 3.3/0.016, which is 206.25 Ohms. Remember, this is the smallest amount of resistance that's safe to use with a GPIO output.

You should aim for a safety margin several times above this unless absolutely necessary. In our circuits, we've used 1,000 Ohms, which gives us a safety factor of almost five.
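The arithmetic above is easy to check; this sketch redoes it with awk (the figures - 3.3V, 16mA and the 1,000 Ohm resistors - all come from the text):

```shell
V=3.3      # GPIO voltage
I=0.016    # maximum safe current, 16mA

# R = V / I gives the smallest safe resistance for a GPIO output.
R_min=$(awk -v v="$V" -v i="$I" 'BEGIN { printf "%.2f", v / i }')

# The 1K resistors used in our circuits give this safety factor.
factor=$(awk -v r="$R_min" 'BEGIN { printf "%.1f", 1000 / r }')

echo "minimum safe resistance: $R_min Ohms (1K gives a ${factor}x margin)"
```

Swap in your own resistor value to see how much margin your circuit has.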

 

 

 

Connect your Android phone to your Pi and you can use it for wireless networking

 

Networking

All Raspberry Pi models - with the exception of the Pi Zero - come with a wired Ethernet connection, which is fine for most occasions, but sometimes the cable just won't reach. You could use a USB wireless dongle. However, if you have an Android phone and your carrier hasn't disabled the feature, you can use that as your network device.

This has the added advantage of not drawing as much power from the Pi, which makes things easier when running from batteries. You should be able to share your phone's Wi-Fi connection as well as 3G, so it won't necessarily eat into your data allowance.

Of course, it's best to check the connection type before downloading large files. To do this, connect your phone to your Pi and enable USB tethering on the phone under Settings > Wireless and Networks > Tethering and Mobile Hotspot. If it doesn't stay enabled, try a different cable.

Back on the Pi, if you type sudo ifconfig you should see the usb0 interface listed, but it won't necessarily have an IP address. Network interfaces are controlled by the /etc/network/interfaces file. By default, there may not be an entry here for USB networking, so you'll need to set one up.

Open the file with your favourite text editor as sudo - for example, sudo nano /etc/network/interfaces - and add these lines:

 

iface usb0 inet dhcp 
nameserver 208.67.220.220 
nameserver 208.67.222.222

 

This uses the OpenDNS nameservers, but you can use others if you prefer. Now you can restart the interfaces or reboot your Pi to pick up the changes. You should then have a working internet connection.

 

Use the GPIO pins to light up some LEDs

 

The Raspberry Pi's diminutive size means it's ideal for creating your own embedded devices. This can be a great way to build small computing devices to solve specific problems, as we saw with the camera controller earlier.

 

 

Figure 1: this shows how half of the LEDs are connected. The rest are added in exactly the same way

 

 

However, there's the small problem that it can be difficult to know what's going on inside your Pi without a screen. Fortunately, the Pi's designers thought about this problem and added facilities for getting information into and out of a Pi without most of the usual PC peripherals. This is done via General Purpose Input/Output (GPIO).

(Note: if you have a Pi Zero, its GPIO header is unpopulated - you'll need to solder one on yourself.)

You may have wondered what those spiky pins near the SD card reader are for - well, you're about to find out. This basic circuit can be used to display information from any source, but here we'll use it to solve a problem we frequently have at TechRadar - finding the final byte of an IP address.

 

This is useful if you want to access your Pi remotely, but can't set it up with a static IP because, for example, you have to move it between networks. Usually you can work out the first three bytes from the network mask, but the final one can be elusive unless you have a monitor.

We're going to use the gpio program, which is part of WiringPi. You can find out more about it on the WiringPi website.

It comes as source code, so we'll have to unpack it and compile it with:

$ tar xvf wiringPi.tgz 
$ cd wiringPi/wiringPi 
$ make 
$ sudo make install 
$ cd ../gpio 
$ make
$ sudo make install

We'll also be using bc, so install it with:

$ sudo apt-get install bc

That's enough software - now for the hardware! Just a quick word of warning before we start - it is possible to break your Pi by connecting the wrong wires together, so make sure you double-check everything before powering up.

 

 

Figure 2: connect the breadboard to these pins. We used commercially available single-pin connectors, but you could also solder connectors on or use an old IDE cable

The circuit for this is very simple - you just need to connect each output to the positive leg of an LED, then the negative (shorter) leg of the LED to a 1K Ohm resistor and, finally, the other leg of the resistor to the common ground. See figures 1, 2 and 3 on this page for details. Once you have your fully set-up board connected to your Pi, you can make things happen.

To start with, we'll use the final pin. This is pin 7 (the pin layout doesn't follow a numbering pattern). Open a terminal and set it to output with:

 

$ gpio -g mode 7 out


Then you can turn it on with: gpio -g write 7 1

And off again with: gpio -g write 7 0

If you're anything like us, you'll do this repeatedly until the novelty wears off.

Once it has, you're ready to build the script. It contains four parts. The first just sets the pins to the right mode and makes sure they're switched off:

 

pins="7 8 25 24 23 18 15 14" 

for x in $pins 
do 
gpio -g mode $x out 
gpio -g write $x 0 
done
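The gpio tool only exists once WiringPi is installed, but you can dry-run this first part anywhere by stubbing gpio out as a shell function that just prints its arguments (the stub is purely for illustration and is not part of the real script):

```shell
# Hypothetical stub so the loop can be traced without real hardware:
gpio() { echo "gpio $*"; }

pins="7 8 25 24 23 18 15 14"

# Same loop as above: each pin is set to output mode and switched off.
for x in $pins
do
  gpio -g mode $x out
  gpio -g write $x 0
done
```

Each iteration prints the two gpio commands it would have run, so you can check the pin order before wiring anything up.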


The second part grabs the IP address from ifconfig, converts it to binary, then pads it with leading zeros if necessary.

 

ipaddress=$(ifconfig eth0 | grep 'inet ' | awk '{print $2}' | cut -f4 -d'.') 
binary=$(echo "ibase=10;obase=2;$ipaddress" | bc) 
paddedBinary=$(printf %08d $binary)


The next part uses cut to extract the bit we want from this binary string and sends it to the appropriate pin.

bit=1 
for x in $pins 
do 
out=$(echo $paddedBinary | cut -b$bit) 
gpio -g write $x $out 
bit=$((bit+1))
done

And finally, we tell the script to sleep for five minutes, then turn the LEDs off.

sleep 5m 
for x in $pins 
do 
gpio -g write $x 0 
done

That's it!

 

 

Figure 3: the simple circuit in all its glory

Create the script showIP.sh, then make it executable with:

$ chmod a+x showIP.sh

Then type sudo ./showIP.sh to display your IP. To make this run automatically at startup, just add this line to rc.local:

 

 

/home/pi/showIP.sh &

See the earlier Camera Controller section for details on how to do this.

 

We've shown how to send output through the GPIO pins but, as the name suggests, they can accept input too. With input, it's even more important to make sure you don't send too much power into the pins.

To get input, simply set the mode with gpio -g mode, then read the value with gpio -g read.

This hardware can display any eight bits of information, so you don't have to limit it to showing just IP addresses. For example, you could make a modified version of the earlier camera controller script that uses the LEDs to indicate its progress. You can find details of the full selection of GPIO pins here.

The pins we've used are the same on revisions 1 and 2 of the Raspberry Pi, but some changed between the two versions. If you design your own circuits, or use ones from the web, make sure you use the right pins for your board.

You don't have to limit yourself to simply turning the pins on and off. The Pi supports a couple of methods for passing larger amounts of data through the GPIO. The two most common are the Serial Peripheral Interface (SPI) bus and the Inter-Integrated Circuit (I2C) bus.

There's a range of devices available that use these, and plenty of information online to help you get started. So what's stopping you? Get out your soldering iron and build a robot army.

Gertboards and Arduinos

Connecting directly to your Pi's GPIO pins can provide basic input and output control, but there are limitations. There are two additional items you can get to help you interact more precisely with the world around you.

The Gertboard is a fairly complete expansion package for connecting your Pi to the real world, including a microcontroller and a variety of input and output options. It comes as an unassembled kit, so you'll have to get your hands on a soldering iron to put it together.

Meanwhile, the Arduino is a microcontroller that can connect to your Pi (or any other computer) via the USB port. It normally comes assembled, but kit forms are also available. In its raw form it has fewer features than the Gertboard (which includes an Arduino microcontroller), but it can be expanded with a huge variety of shields.

 

 

 

 

O Sentido HAT permite que seu Raspberry Pi detete o que está acontecendo no mundo ao seu redor

 

Also check out the official Sense HAT, an add-on board that gives your Pi sensors for monitoring the outside world. It also comes with an LED matrix, letting you display data without the need for a monitor.

 

Last but not least, the RasWIK is a wireless kit designed specifically to teach you how to create wireless sensors and actuators, and many of its included projects don't even require soldering.
