On Regularized Newton-type Algorithms and A Posteriori Error Estimates for Solving Ill-posed Inverse Problems   

Ill-posed inverse problems have wide applications in many fields such as oceanography, signal processing, machine learning, biomedical imaging, remote sensing, geophysics, and others. In this dissertation, we address the problem of solving unstable operator equations with iteratively regularized Newton-type algorithms. Important practical questions such as selection of regularization parameters, construction of generating (filtering) functions based on a priori information available for different models, algorithms for stopping rules and error estimates are investigated with equal attention given to theoretical study and numerical experiments.


          07/27/17: Defence of dissertation in the field of computer science, Muhammad Ammad-ud-din, M.Sc. (Tech.)   
Machine learning methods for improving drug response prediction in cancer

Muhammad Ammad-ud-din, M.Sc. (Tech.), will defend the dissertation "Machine learning methods for improving drug response prediction in cancer" on 27 July 2017 at 12 noon in Aalto University School of Science, lecture hall T2, Konemiehentie 2, Espoo.

Opponent: Professor Anil Korkut, The University of Texas MD Anderson Cancer Center, USA

Custos: Professor Samuel Kaski, Aalto University School of Science, Department of Computer Science


          Senior Software Engineer - Speech & NLU - AMZN CAN Fulfillment Svcs, Inc - Toronto, ON   
How about Amazon Echo and Speech and Language Understanding? Interested in Machine Learning?...
From Amazon.com - Fri, 09 Jun 2017 05:10:22 GMT - View all Toronto, ON jobs
          Machine learning for the masses: How one company achieved ROI in just one month   

Presented by IBM. Nobel Prize-winning author André Gide once said, “Man cannot discover new oceans unless he has the courage to lose sight of the shore.” The typical organization has done much to empower its employees. Most progressive firms allow employees to drive basic decision-making, make financial and purchasing decisions, and freely allocate how they […]


          Google Photos is making sharing pictures with friends even easier   

On Wednesday, Google announced several updates to the Photos app that will make sharing selfies, your trip to Machu Picchu, and that ridiculous sign you saw on the way to work even easier.


These features were first announced at Google's I/O conference in May. Now we have even more information about the updates. A new feature called “suggested sharing,” for instance, uses machine learning to automatically suggest who to share photos with based on your habits.


The app will also proactively search for photos to share by recognizing events like weddings and pre-selecting images and people. That means less endless scrolling for what to share, or who to share with. You can share directly in the app or via email or phone. Read more...

          Machine Learning Software Engineer - Intel - Toronto, ON   
In order to take advantage of the many opportunities that we see in the future for FPGAs, PSG is looking for engineers to join our teams....
From Intel - Sat, 17 Jun 2017 10:23:09 GMT - View all Toronto, ON jobs
          New AI Technology Learns How to Read Minds   

Scientists at Carnegie Mellon University have created a machine learning technology that utilizes brain activation patterns to identify complex thoughts and sentences, which is, in effect, an ability to "mind read."
          How artificial intelligence is taking on ransomware   
Twice in the space of six weeks, the world has faced major attacks of ransomware — malicious software that locks up photos and other files stored on your computer, then demands money to release them. Despite those risks, many people just aren’t good at updating security software.

In the early days, identifying malicious programs such as viruses involved matching their code against a database of known malware. [...] a program that starts encrypting files without showing a progress bar on the screen could be flagged for surreptitious activity, said Fabian Wosar, chief technology officer at New Zealand security company Emsisoft. An even better approach identifies malware using observable characteristics usually associated with malicious intent — for instance, by quarantining a program disguised with a PDF icon to hide its true nature.

For that, security researchers turn to machine learning, a form of artificial intelligence. The security system analyzes samples of good and bad software and figures out what combination of factors is likely to be present in malware.

On the flip side, malware writers can obtain these security tools and tweak their code to see if they can evade detection. Some websites already offer to test software against leading security systems. Dmitri Alperovitch, co-founder and chief technology officer at Irvine vendor CrowdStrike, said that even if a particular system offers 99 percent protection, “it’s just a math problem of how many times you have to deviate your attack to get that 1 percent.”

Though Cylance plans to release a consumer version in July, it says it’ll be a tough sell — at least until someone gets attacked personally or knows a friend or family member who has.
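The classify-from-samples idea described above can be sketched in a few lines: train a model on behavioral features of known-good and known-bad programs, then score a new one. The feature names and toy data below are hypothetical stand-ins, not any vendor's actual feature set.

```python
# Minimal sketch: a classifier trained on behavioral traits of software samples.
# Features (all hypothetical): [files_encrypted_per_min, shows_progress_ui,
# icon_matches_filetype]. Labels: 0 = benign, 1 = malware.
from sklearn.ensemble import RandomForestClassifier

benign = [[0, 1, 1], [1, 1, 1], [0, 1, 1], [2, 1, 1]]
malware = [[80, 0, 0], [120, 0, 1], [95, 0, 0], [60, 0, 0]]

X = benign + malware
y = [0] * len(benign) + [1] * len(malware)

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# A program that encrypts files rapidly without a progress bar looks malicious.
suspect = [[100, 0, 1]]
print(clf.predict(suspect)[0])  # → 1 (flagged as malware)
```

In a real product the feature vector would be far richer (API call sequences, entropy of written files, signer reputation), but the train-on-both-classes structure is the same.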
          Computer system predicts products of chemical reactions   
Machine learning approach could aid the design of industrial processes for drug manufacturing. When organic chemists identify a
          Google Improves Job Search with Listings from Many Websites   


Google has launched a new tool that lets users search for new job listings across many major career websites.

Beginning Tuesday, English-speaking job seekers in the United States will be able to use the new service. The new search tool was first announced at Google’s yearly developer’s conference in May.

Now, when people search for “jobs” or “jobs near me” on Google, the results will include listings from a number of websites. In the past, a Google job search only brought up general results from major jobs sites. With the new results, users can connect directly to the job descriptions that interest them.

Among the major job sites cooperating with Google are LinkedIn, Monster, Careerbuilder, Facebook, ZipRecruiter and Glassdoor.

The tool allows job seekers to narrow their search by category, job title or date posted. The new search service can also let users know how long it will take to drive to the new job.

As with many career websites, people can also request that alerts or emails be sent to inform them of new jobs of interest.

Google says the new system helps job seekers by putting many different listings in one place. Users no longer have to go onto multiple websites to search.

An announcement on the company’s website said the system is powered by Google’s machine learning technology. This makes the results more relevant and provides suggestions to users of other possible jobs of interest, the company said. The search is also supposed to prevent duplicate or old listings from coming up.

The new Google search tool lets users connect directly to job descriptions that interest them. (GOOGLE)

Google plans to keep adding new job sites to the system over time. The company said it has encouraged all job providers to make openings available through the new Google career search.

The system is available for people using a desktop or mobile device.

The new service is Google’s latest attempt to keep users on the search engine while they seek various products. The company already expanded its search capabilities related to travel, ordering food and connecting users to other local services.

I’m Bryan Lynn.

Bryan Lynn wrote this story for VOA Learning English, based on a report from the Associated Press and other sources. Hai Do was the editor.

We want to hear from you. Write to us in the Comments section, and visit our Facebook page.

Words in This Story

category – n. type or kind

alert – n. message of notification

relevant – adj. important, related in an appropriate way

duplicate – adj. exactly the same as something else

encourage – v. tell or advise to do something

capability – n. the ability to do something


          RPP #154: Maplesoft Möbius - Interview with Jim Cooper   
  • Interview with Jim Cooper, CEO of Maplesoft, about Möbius, their new comprehensive online courseware environment focused on science, technology, engineering, and mathematics. We discuss:
    • Maplesoft history
    • Maplesoft course/module marketplace
    • Möbius platform and toolkit
    • LMS integration
    • Adaptive and customized learning
    • Analytics to improve learning
    • AI / Machine Learning / Deep Learning
    • Building an AI tutor
    • Pricing models
    • Podsafe music selection
    Duration: 36:37

              Google launches Allo, its personal assistant in Spanish   

    Google has announced the launch of Google Allo, its personal assistant for the Spanish market. Apple's Siri counterpart is presented as a virtual assistant that converses with you, understands your world, and helps you with your daily tasks thanks to advances in artificial intelligence and machine learning.

    As announced on the official Google España blog:

    "Last May we announced the arrival of the Google Assistant, a virt...

              Artificial intelligence/Machine learning   
    • Is your AI being handed to you by Google? Try Apache open source – Amazon's AWS did

      Surprisingly, the MXNet Machine Learning project was this month accepted by the Apache Software Foundation as an open-source project.

      What's surprising about the announcement isn't so much that the ASF is accepting this face in the crowd to its ranks – it's hard to turn around in the software world these days without tripping over ML tools – but rather that MXNet developers, most of whom are from Amazon, believe ASF is relevant.

    • Current Trends in Tools for Large-Scale Machine Learning

      During the past decade, enterprises have begun using machine learning (ML) to collect and analyze large amounts of data to obtain a competitive advantage. Now some are looking to go even deeper – using a subset of machine learning techniques called deep learning (DL), they are seeking to delve into the more esoteric properties hidden in the data. The goal is to create predictive applications for such areas as fraud detection, demand forecasting, click prediction, and other data-intensive analyses.

    • Your IDE won't change, but YOU will: HELLO! Machine learning

      Machine learning has become a buzzword. A branch of Artificial Intelligence, it adds marketing sparkle to everything from intrusion detection tools to business analytics. What is it, exactly, and how can you code it?

    • Artificial intelligence: Understanding how machines learn

      Learning the inner workings of artificial intelligence is an antidote to these worries. And this knowledge can facilitate both responsible and carefree engagement.

    • Your future boss? An employee-interrogating bot – it's an open-source gift from Dropbox

      Dropbox has released the code for the chatbot it uses to question employees about interactions with corporate systems, in the hope that it can help other organizations automate security processes and improve employee awareness of security concerns.

      "One of the hardest, most time-consuming parts of security monitoring is manually reaching out to employees to confirm their actions," said Alex Bertsch, formerly a Dropbox intern and now a teaching assistant at Brown University, in a blog post. "Despite already spending a significant amount of time on reach-outs, there were still alerts that we didn't have time to follow up on."


              TensorFlow 1.0 Coverage   

              Open source machine learning tools as good as humans in detecting cancer cases   
    • Open source machine learning tools as good as humans in detecting cancer cases

      Machine learning has come of age in public health reporting according to researchers from the Regenstrief Institute and Indiana University School of Informatics and Computing at Indiana University-Purdue University Indianapolis. They have found that existing algorithms and open source machine learning tools were as good as, or better than, human reviewers in detecting cancer cases using data from free-text pathology reports. The computerized approach was also faster and less resource intensive in comparison to human counterparts.

    • Machine learning can help detect presence of cancer, improve public health reporting

      To support public health reporting, computers and machine learning can provide better access to unstructured clinical data – including for cancer case detection – according to a recent study.
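As a rough illustration of the kind of open-source pipeline described above – classifying free-text pathology reports as reportable cancer cases or not – here is a minimal sketch. The report snippets, labels, and model choice are invented for illustration and are not the study's actual data or algorithm.

```python
# Minimal sketch: bag-of-words classification of free-text pathology reports.
# All text below is an invented placeholder, not real clinical data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

reports = [
    "invasive ductal carcinoma identified in left breast specimen",
    "malignant melanoma with positive margins",
    "adenocarcinoma of the colon, moderately differentiated",
    "benign fibroadenoma, no atypia seen",
    "normal colonic mucosa, no evidence of malignancy",
    "benign nevus, margins clear",
]
labels = [1, 1, 1, 0, 0, 0]  # 1 = reportable cancer case, 0 = not

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(reports, labels)

print(model.predict(["specimen shows infiltrating carcinoma"])[0])
```

A production system would train on thousands of labeled reports and validate against human reviewers, which is the comparison the researchers report.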


              TensorFlow/Google: Latest Moves   

              FOSS and Artificial Intelligence   

              TensorFlow   

              Machine Learning Engineer   
    TX-Dallas, Mastech Digital provides digital and mainstream technology staff as well as Digital Transformation Services for leading American Corporations. We are currently seeking a Machine Learning Engineer for our client in the IT-Services domain. We value our professionals, providing comprehensive benefits, exciting challenges, and the opportunity for growth. This is a Contract position and the client is l
              Google Cloud Platform : Good Times Ahead   

    The tech behemoths Amazon, Microsoft and Google are established players in one of the battles that will shape how customers view of and invest in computing. This is an area with a potential of a hundred billion dollars plus that can be secured by the vendors – a lucrative space that each one wants to corner: the cloud. For a quick recap of the dollars under consideration, read here. Cloud lets businesses tap processing, storage and software on demand over the web. The tall, powerful and cool servers with gargantuan memory installed inside enterprises are gradually giving way to a tap-on-need model – use them when needed and shut them down at other times. The vast data centers and governance brought in by these tech behemoths make them ideal partners for businesses to tap such services on a need-to-have basis – whenever and wherever required.

    Amazon is by far the best-established leader here, with revenue that far exceeds the combined revenue of all its competitors put together. Amazon partly achieved this by bringing a single-minded focus to this space to scale up and win, and it paid back handsomely. Amazon’s first-mover advantage, coupled with slow reactions from the competition, has now given Amazon an almost insurmountable lead in this space. That lead is the focus and attention of the next two players. Microsoft has been pushing Azure extensively for the last 2-3 years and clocking impressive success. The lesser known of the trio in this space – Google – is now flexing its muscle and is focused on striking it big here. Google is recognized as an early proponent of cloud computing – after all, Google built huge data centers in its early days and ran services like Search, Gmail and Maps, available around the world and at unthinkable scale. Alongside, developers built other applications on top, resulting in an expanding Google universe. Google has a strong reputation for running scalable, secure services and is recognized as one that has delivered successfully for a long time. By being late and remaining indifferent to this space, Google lost out on big opportunities that were out there, and Amazon happily grabbed them. Now Google wants to come back aggressively and be counted as a large player in this space, and is increasing its investments, market messaging and outreach efforts. Last week the company hosted an event to talk about its upcoming plans for the Google Cloud Platform and to highlight some notable successes it has notched up thus far. I listened to the webcast and followed the announcements keenly to see how Google is planning to move things here, and I heard good, actionable things.

    Google’s overall messaging shows that the momentum in the business continues, with the enterprise as a key segment to focus on for adoption as the platform scales. Google is getting better positioned to take a sizable chunk of business in the ever-growing public cloud space over the next few years. The overall market is projected to have 50% of enterprise workloads moved to the cloud over time. Google paraded customers and customer stories – the likes of Spotify, Coca-Cola and Disney – as proof of successful adoption of its services.

    The emphasis on a move forward basis is positioned around:

    A. Machine learning as a cornerstone of their approach, driving the attendant benefits for customers.

    B. Monetization/commercialization of native security tools used within Google, making them available to customers.

    C. Making deployment and migration easier.

    In terms of upcoming innovation, consistent with its focus on enterprise adoption, Google talked about the lofty vision of a no-ops goal for enterprises. This would be an ideal demonstration of the power of cloud computing, and if Google and others are able to make it happen everywhere, it will be a true sign of a changed paradigm. Another important facet of the evolution of the cloud revolved around the extreme emphasis on machine learning and the Google Cloud Platform’s leverage of it. Google has open sourced a product called TensorFlow, and it is the company’s core belief that embracing machine learning will become non-negotiable for innovative startups focused on scaling globally and offering sophisticated services. Add into the mix cloud monitoring tools that work across clouds, and enterprises can hardly resist massive cloud adoption. And in order to keep helping enterprises adopt and scale faster, Google wants to focus on three important aspects of cloud computing – data centers, security and containers.

    I drilled into these a little more to find what could be differentiated in offering such services and what I could recollect from the conference webcast included the following, which Google finds as the drivers for increasing enterprise adoption of their services.

    1. Better value: GCP can cost up to 50% less than competitors. Google provides automatic discounts as customers consume higher volumes. GCP also offers custom machine types (i.e., cores or gigabytes of memory), which saves customers money versus the static instance types from other vendors that often lead to overprovisioning.

    2. Accelerate innovation: Google’s approach here is to let customers run applications with no additional operations staff needed. For example, Google showcases Snapchat, which grew from zero to 100 million users without hiring an operations team (just two people).

    3. Risk management: Google will focus on providing best-in-class security for customers’ data and digital assets, protecting privacy, and helping customers conform to compliance and regulatory needs.

    4. Open source adoption – leading to better management by customers, with products like Kubernetes (focused on managing containers).

    I got the feeling that GCP is comparable to Amazon’s fabled AWS services for the purpose of enterprise adoption. While the engineering and under-the-hood battle is one part of the equation, the real determinant of success or also-ran status lies in shaping market forces – go-to-market, solutions, partnerships, support and ease of doing business – an area that Google will have to focus on heavily. With Google’s stated plans to triple its data center regions, and with some good early demonstrated success, the market should begin to warm up to Google. Enterprise success depends not just on what’s available from a service provider – determining what and how to move to the cloud, transforming the IT landscape, flipping over the governance model and managing change all have a say in the eventual success of any cloud initiative. With substantial progress in this space, and with the tech giants competing aggressively for their piece of this fast-growing pie, the competition expands the market, services get more sophisticated yet mature fast, the industry improves and, courtesy of the Moore’s-law effect, customers get superior services at a lower cost. It’s a win-win situation for all.



              Machine Learning on Heroku with PredictionIO   
    Last week at the TrailheaDX Salesforce Dev Conference we launched the DreamHouse sample application to showcase the Salesforce App Cloud and numerous possible integrations. I built an integration with the open source PredictionIO Machine Learning framework. The use case for ML in DreamHouse is a real estate recommendation engine that learns based on users with ... Read more
              Sr Software Engineer ( Big Data, NoSQL, distributed systems ) - Stride Search - Los Altos, CA   
    Experience with text search platforms, machine learning platforms. Mastery over Linux system internals, ability to troubleshoot performance problems using tools...
    From Stride Search - Tue, 04 Apr 2017 06:25:16 GMT - View all Los Altos, CA jobs
              The Machine Learning Imperative   
    Machine learning is here, and it's finally mature enough to cause a major seismic shift in virtually every industry.
              A study finds that Twitter can be used to predict riots with great accuracy   


    As reported in The Verge, a study from Cardiff University has found that Twitter can be used to detect civil unrest up to an hour faster than the security forces would. The use of the Internet to anticipate risky situations had been discussed before, though in the context of YouTube and ISIS.

    The study used a database of 1.6 million tweets dating from the 2011 London riots. With them, the researchers developed a series of machine learning algorithms that can identify threats on Twitter automatically, taking into account factors such as the location of a tweet, the frequency of certain words, and the spacing between tweets.

    By applying these algorithms they managed, according to the study, to be faster than the police in practically every situation. As the report notes, current approaches to detecting events that could disrupt public order are aimed at large-scale matters, such as terrorist attacks.

    This method can give warnings about much smaller-scale incidents, such as fires or traffic accidents. Thanks to social networks this gap can be narrowed, and the approach could also be applied to large-scale events without any problem.

    The method could be used by the police to predict terrorist attacks and to prevent civil unrest before it happens. The researchers said their system can work as well as human agents – and judging by the results, even better.

    The Cardiff University report confirms what companies such as Dataminr have been doing for some time for governments and security forces: aggregating what the public says on social networks and turning it into alerts and warnings about events of high social and political impact.
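Two of the signals the study reportedly uses – the frequency of certain words and the spacing between tweets – can be sketched as a simple feature extractor over a window of tweets. The word list, example tweets, and thresholds below are illustrative assumptions, not the study's actual method or parameters.

```python
# Minimal sketch: extract unrest-related features from a window of tweets.
# UNREST_WORDS is a hypothetical watchlist, not the study's vocabulary.
from datetime import datetime

UNREST_WORDS = {"riot", "looting", "fire", "police", "smash"}

def window_features(tweets):
    """tweets: list of (timestamp, text) pairs, oldest first."""
    texts = [text.lower() for _, text in tweets]
    # Fraction of tweets mentioning at least one watchlist word.
    hits = sum(any(w in t for w in UNREST_WORDS) for t in texts)
    word_rate = hits / len(tweets)
    # Mean seconds between consecutive tweets: bursts have small gaps.
    times = [ts for ts, _ in tweets]
    gaps = [(b - a).total_seconds() for a, b in zip(times, times[1:])]
    mean_gap = sum(gaps) / len(gaps) if gaps else float("inf")
    return {"unrest_word_rate": word_rate, "mean_gap_seconds": mean_gap}

tweets = [
    (datetime(2011, 8, 8, 21, 0, 0), "Shops being smashed on the high street"),
    (datetime(2011, 8, 8, 21, 0, 40), "Police everywhere, stay away"),
    (datetime(2011, 8, 8, 21, 1, 10), "Fire near the station, looting started"),
]
f = window_features(tweets)
print(f)  # a burst of closely spaced, unrest-heavy tweets
```

Features like these, combined with tweet location, would then feed the kind of classifier the researchers trained on the 2011 riot data.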

    Via | The Verge


    -
    The story "A study finds that Twitter can be used to predict riots with great accuracy" was originally published on Genbeta by Sergio Agudo.


              Future of Ariba Network on display at SAP Ariba Live   
    SAP Ariba Live shows enhancements to procurement software and looks at the future of the SAP Ariba Network, including machine learning, AI, bots and blockchain.
              What’s New in the Xen Project Hypervisor 4.9?   

    The Xen Project Hypervisor 4.9 release focuses on advanced features for embedded, automotive and cloud-native computing use cases; enhanced boot configurations for more portability across different hardware platforms; the addition of new x86 instructions to hasten machine learning computing; and improvements to existing functionality related to the ARM® architecture, device model operation hypercalls, and more.

    We are also pleased to announce that Julien Grall, Senior Software Engineer at ARM, will remain release manager for the Xen Project Hypervisor 4.10 release.


              Comment on Boosting and AdaBoost for Machine Learning by Jason Brownlee   
    Glad to hear it.
              Java Developer (Job #6358)   
    Previous experience in software engineering and in large-scale implementations of statistical methods to build decision support or recommender systems will prepare you for this role. You will need to be innovative and entrepreneurial to work in a start-up-like environment.

    Responsibilities

    Work with large and complex data sets to solve difficult and non-routine problems
    Develop analytic models and work closely with cross-functional teams and people to integrate solutions
    Drive the collection of data and refinement from multiple high-volume data sources
    Research methods to improve statistical inferences of variables across models


    Job Qualifications

    3+ years of relevant work experience
    Experience programming with Java

    Experience working with machine learning and distributed computing tools like Hadoop is preferred
    Excellent interpersonal and communication skills
    Excellent debugging and testing skills, and a willingness to quickly learn new technologies
    BS in Computer Science, Statistics or equivalent practical experience
    Large-scale systems (billions of records) design and development with knowledge of UNIX/Linux
    Strong sense of passion, teamwork and responsibility
              Quantitative Manager (Job #6454)   
    The successful candidate will be creative, resourceful and experienced with Agile methods and techniques to implement Scrum; and must be a self-starter with a strong background in statistics, machine learning and big data, including information retrieval, natural language processing, algorithm analysis, and real-time distributed computing methods. As a quantitative manager you will have personnel management responsibility and can exercise your talents to lead teams of expert data scientists and engineers on multiple assignments and projects in a disciplined and fast-paced environment. You must be confident tackling complex engineering problems, and will be expected to design algorithms and codify large-scale statistical models for real-time processing based on our analytics architecture.

    Advanced Analytics
    The Advanced Analytics service area is composed of professionals who possess competency and experience in the areas of risk management, business and operational targeting processes, computational linguistics, machine learning, knowledge discovery, semantic engineering, and probabilistic and statistical data mining. Advanced Analytics professionals use these skills to assess, analyze, and improve the effectiveness and efficiency of targeting methods and operational control processes, offer recommendations to improve operations, and assist clients with enterprise risk and compliance activities.

    Requirements
    Minimum Qualifications
    • Excellent communication skills and ability to understand and communicate business requirements
    • Excellent analytical and problem-solving skills
    • Strong programming skills and experience in SPSS, SAS, R, Matlab and similar toolset and deep understanding of exploratory data analysis
    • Background in statistical techniques, NLP and machine learning, predictive modeling, data mining, statistical inference and classification algorithms
    • Develop statistical models and analytical methods to predict, quantify, and forecast multiple business concerns and provide performance reporting capabilities
    • Experience in modeling techniques, statistical analysis, propensity score matching, multivariate analysis, logistic regression, time series, survival analysis, decision trees, and neural networks
    • BA/BS in Statistics, Mathematics, CS or related technical field, and MS or PhD preferred
    • Strong sense of passion, teamwork and responsibility
    • Willingness to travel and flexibility to commute to clients in the Washington D.C. metro area as needed
              Data Scientist (Job #6243)   
    The Advanced Analytics group is all about focusing on our client's mission. Individuals in this role are expected to work as both a software developer and a quantitative researcher. You will be responsible for implementing and promoting data-driven decision support systems, and for creating high-impact analytics solutions for our clients. In this role you will work on a small team and may switch assignments and projects in a disciplined and fast-paced environment. You must be confident tackling complex problems, and will be expected to design algorithms and codify large-scale statistical models for real-time processing based on the analytics architecture.

    Previous experience in software engineering and in large-scale implementations of statistical methods to build decision support or recommender systems will prepare you for this role. You will need to be innovative and entrepreneurial to work in a start-up-like environment.

    Responsibilities

    Work with large and complex data sets to solve difficult and non-routine problems
    Develop analytic models and work closely with cross-functional teams and people to integrate solutions
    Drive the collection of data and refinement from multiple high-volume data sources
    Research methods to improve statistical inferences of variables across models


    Job Qualifications

    3+ years of relevant work experience
    Experience programming with Java, Python and SQL application development
    Experience of relational databases and ETL data processing with thorough understanding of data structures, algorithms and design best practices
    Experience analyzing complex, high-dimensionality data to perform text mining, NLP techniques and implement information retrieval or recommender systems
    Experience with statistical computing tools like R, SAS, and SPSS
    Experience working with machine learning and distributed computing tools like Hadoop is preferred
    Excellent interpersonal and communication skills
    Excellent debugging and testing skills, and likes to quickly learn new technologies
    BS in Computer Science, Statistics or equivalent practical experience
    Large-scale systems (billions of records) design and development, with knowledge of UNIX/Linux
    Strong sense of passion, teamwork and responsibility
              Researchers Think They Can Use Twitter To Spot Riots Before Police   
    Researchers in the UK used machine learning algorithms to analyse 1.6 million tweets in London during the infamous 2011 riots, which resulted in widespread looting, property destruction and over 3,000 arrests. According to the researchers, analysing Twitter data to map out where violence occurred in London boroughs was faster and more accurate than relying on emergency calls -- or even on-the-ground information gathering.
       
     
     

              Google Photos is making sharing pictures with friends even easier   
    On Wednesday, Google announced several updates to the Photos app that will make sharing selfies, your trip to Machu Picchu, and that ridiculous sign you saw on the way to work even easier.


    These features were first announced at Google's I/O conference in May. Now we have even more information about the updates. A new feature called “Suggested Sharing,” for instance, uses machine learning to automatically suggest who to share photos with based on your habits.

    Image: google

    The app will also proactively search for photos to share by recognizing events like weddings and pre-selecting images and people. That means less endless scrolling for what to share, or who to share with. You can share directly in the app or via email or phone.

    Reported by Mashable.
              Software Engineer - Computer Vision/Machine Learning Expert - Uber - Boulder, CO   
    About the Team: Uber Advanced Technologies, Engineering - Imagery is the Louisville, CO division of the Uber Engineering Team...
    From Uber - Sat, 22 Apr 2017 14:05:27 GMT
              Bioinformatics Specialist-Metagenomics/Proteomics - Signature Science, LLC - Austin, TX   
    Travel to project and business development meetings as needed. Familiarity with machine learning, Git, and agile software development is a plus;... $90,000 a year
    From Signature Science, LLC - Tue, 06 Jun 2017 09:05:50 GMT
              Google Photos shared libraries feature is rolling out now   
    In mid-May, Google announced an upcoming feature for Google Photos called Suggested Sharing; with it, users are presented with sharing suggestions made possible via machine learning. That feature is rolling out to users this week, the company has announced, making Google Photos even easier to use; the shared libraries feature is rolling out, too. Once it arrives on your …
              Tech Benefits Must be Shared   
    The UK needs to make sure that the dividends of machine learning are shared with everyone in society according to the country’s top researchers in the field.
              Comment on Fei-fei Li in Google Cloud NEXT ’17: Announcing Google Cloud Video Intelligence API, and more Cloud Machine Learning Updates by loveurownlife
    SUPERB
              Business Continuity / Disaster Recovery Architect - Neiman Marcus - Dallas, TX   
    Red Hat Enterprise Linux, AIX. Advanced degree in Applied Mathematics, Business Analytics, Statistics, Machine Learning, Computer Science or related fields is a...
    From Neiman Marcus - Thu, 25 May 2017 22:30:52 GMT
              PROPRIETARY SYSTEMS TRADER   

    PROPRIETARY TRADING SYSTEMS – ARTIFICIAL INTELLIGENCE – MACHINE LEARNING – ALGORITHMIC TRADING Source Systems is now hiring for immediate start on a number of commercial Stock Market & Forex projects. We provide motivated and dedicated individuals the opportunity to trade the FOREX, NASDAQ, and NYSE markets on a proprietary platform in a professional environment with […]

    The post PROPRIETARY SYSTEMS TRADER appeared first on Los Angeles Job Board - jobs, employment, free job posting.


              Core ML: an Apple framework for integrating machine learning models more easily into your iOS, watchOS, macOS and tvOS applications
    Core ML: an Apple framework for integrating machine learning models more easily into your
    iOS, watchOS, macOS and tvOS applications

    While Apple was unveiling its latest products at its WWDC 2017 developer conference, the Cupertino firm also announced a new machine learning (ML) framework named Core ML. Core ML lets developers integrate trained machine learning models into their applications simply and easily. More specifically, it targets...
              Hedge Funds Look to Machine Learning, Crowdsourcing for Competitive Advantage   
    Hedge funds are testing new quantitative strategies that could supplant traditional fund managers. Photo: Quantopian. At Boston-based financial startup Quantopian, a team of analysts and engineers creates tools that allow anyone to write investment algorithms...
              A Cautionary Tale for Healthcare   
    During my CIO career, I’ve worked on a few Harvard Business School case studies and I’ve had the “joy” of presenting my failures to Harvard Business School students for over a decade.

    I enjoy telling stories and inevitably the cases I teach are about turning lemons into lemonade.

    In this post, I’d like to tell a story about a recent experience with Marvin Windows and lessons learned that apply to healthcare.   I know that sounds odd, but hear me out.

    At Unity Farm and Sanctuary I’m the proud owner of about 100 Marvin windows from the 1990’s.   All are still functional but incorporate nylon parts that eventually dissolve in sunlight.   I needed to replace the nylon pins that hold the screens in place.

    I did what anyone would do.   I searched the internet for Marvin Top Rail Screen Pin, expecting to find the parts available on Amazon or a Marvin website.   No such luck.   Plenty of “plunger pins” but no top rail pins.  I did find an unindexed PDF of a Marvin catalog.   On page 43, I found “Top Rail Screen Pin M120 11867852”.   I had a part number, so ordering it should be easy, right?

    I went to the Marvin website looking for a part lookup function, an ordering function, or a customer service app.   No such luck.  I did find a corporate 1-800 number on the Contact Us page.

    After calling that number I was redirected to  the web page of a distributor, since Marvin Windows will not sell anything to anyone directly.

    Two days after emailing the distributor, I received an email back from a very kind and helpful person explaining that I had checked the wrong box on the distributor’s webpage - the part number I was asking about is from the Marvin product line and I had checked the Marvin Integrity product line.

    I explained the part number is the part number and I have no idea what product lines Marvin offers.

    She noted that the part was available but that the distributor, too, sells nothing to anyone directly.    I will have to find a local retailer and begin the entire process again.    She was incredibly service oriented and when I asked, she agreed to find a retailer for me and tell them what part number to order.

    Two days later, I received an email from a retailer 50 miles away noting that they could order the parts for me.   I asked the cost and they said $0.25 each.    Given that the process of getting a window part from Marvin is highly convoluted, I ordered 50 - a lifetime supply for the grand sum of $12.50.   I asked when the parts would arrive.    The answer is unknowable since the retailer contacts the distributor who contacts the manufacturer and none of the above have customer-accessible supply chain tracking or logistics information systems.

    Two weeks later I emailed again and was told the parts would arrive 50 miles away in another week.

    A week passed and I received a call from an incredibly service oriented person at the retailer who told me my parts had arrived (they weigh one ounce and fit in a standard letter sized envelope).   I asked if she could mail them to me and she responded that Marvin retailers cannot mail anything to anyone.    They tried it once a few years ago and since they don’t know how much postage it costs to mail a one ounce letter, the package would likely be returned undeliverable after a few weeks.   Best not to risk using shipping services and instead, drive 1.5 hours to pick them up, sometime 8a-5p Monday through Friday.

    Of course, I have a day job so that would mean taking time off work.    I arranged to do conference calls during the 1.5 hour drive.    I received my one ounce of parts for $12.50 one month after my search began.   They fit my window screens perfectly.  Victory!

    On the same day I picked up the screen parts, I needed a very obscure electrical wall plate to cover an old electrical box with a deactivated switch.    I needed half decora/half blank.   I could not imagine such a part was ever made.    30 seconds after searching Amazon, I found it, clicked once, and 12 hours later found it on my doorstep without lifting a finger (or paying shipping).

    The purpose of telling you this story is that Marvin Windows senior leadership (and the Board) must be using Cobol-based mainframes to manage the company when they’re not taking calls on their flip phones.   It’s clear they’ve been asleep since 1985.    When it comes time to replace the windows in my buildings, I would never consider Marvin Windows products, since it’s clear they care more about preserving an ancient business model than about their customers’ modern expectations and experiences.    Such companies will wither and be replaced by an “Uber equivalent” for windows.

    But wait, I’m living in the glass house of healthcare and throwing stones.   Just how easy is it to make an appointment with your doctor, seek real-time telemedicine/telehealth advice, or get access to a “care traffic control” logistics application that shows your progress against a care plan?  In 2017, healthcare is still largely following the Marvin Windows approach of phone, fax, email, smoke signals and Morse code.

    The lesson learned is that in the near future, healthcare organizations that offer an Amazon approach will displace those which do not.   That’s why BIDMC has focused on 5 pillars to guide IT projects in 2016 and 2017 - social networking communication tools, mobile enablement, care management, analytics and cloud services.   Every month we’re launching new functionality that gets us closer to the Amazon experience with such apps as BIDMC@Home (internet of things/telemedicine), OpenNotes, and Alexa ambient listening services.   In two weeks, our entire dataset will be moved to the Amazon data lake with appropriate privacy agreements and security protections so we can take advantage of cloud-hosted machine learning and image recognition services.     By 2018, we’ll be much less Marvin Windows and much more Amazon.

    I do not know the window business and maybe there is something about it that supports old business models while the rest of the world innovates.   However, I do know healthcare, and we need to embrace the same kind of consumer technology focus as every other industry.   If we don’t, our bricks and mortar buildings will go the way of Sears, JC Penney, and Macy’s.


              Dispatch from HIMSS 2017   
    As I wrote last week, I expected 2017 HIMSS to be filled with Wearables, Big Data, Social Networking concepts from other industries, Telemedicine, and Artificial Intelligence.

    I was not disappointed.   42,000 of my closest friends each walked an average of 5 miles per day through the Orlando Convention Center.  One journalist told me “It’s overwhelming.  You do your best to look professional and wear comfy shoes!”

    After 50 meetings and 12 meals in 3 days, here are my impressions of the experience:

    1.  Wearables, while still relevant, have gone from the peak of the hype curve to the trough of disillusionment.   Google Glass, smartwatches, and innovative fitness trackers have not quite achieved their promised potential in healthcare and no one is quite sure how to integrate their data into the workflow.    That being said, the Internet of Things is bigger than in previous years, with home scales, glucometers, and blood pressure cuffs becoming more connected than ever before.  Middleware like Apple HealthKit has significantly reduced the interfacing burden.

    2.  Big Data has morphed into Care Management and Population Health.   We’re at a point in history when healthcare data has become digital but few are sure how to turn that data into wisdom.   Decision support services that analyze problem lists, medications and genomic data, producing customized care plans are emerging.  The challenge is connecting them to the EHR workflow.   The Argonaut work group met for a few hours to decide on the next interoperability capabilities for FHIR and chose scheduling workflow and clinical decision support integration.  This means that any third party developer will be able to integrate their analytic functionality into EHR workflow, generating alerts and reminders and scheduling services (appointments, surgery, infusions/therapy, referrals, and even post acute care) with limited effort and cost.      To me the most important theme at HIMSS 2017 was that FHIR/APIs, cloud hosted services, and EHRs will come together in 2018 similar to the way the iPhone spawned the app store.   Assume every EHR company will have a curated app store and sandbox for developer education/pilot testing within a year.
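The scheduling integration described above can be pictured with a minimal FHIR Appointment resource. This is a hypothetical sketch, not the Argonaut specification itself: the references (Patient/example, Practitioner/example), times, and description are invented placeholders; a real app would POST JSON like this to an EHR's FHIR endpoint.

```python
import json

# A minimal, hypothetical FHIR Appointment of the kind a third-party
# decision-support app could POST to an EHR's /Appointment endpoint.
# All field values here are illustrative placeholders.
appointment = {
    "resourceType": "Appointment",
    "status": "proposed",
    "description": "Post-discharge follow-up suggested by a decision-support service",
    "start": "2018-03-01T09:00:00Z",
    "end": "2018-03-01T09:30:00Z",
    "participant": [
        {"actor": {"reference": "Patient/example"}, "status": "needs-action"},
        {"actor": {"reference": "Practitioner/example"}, "status": "needs-action"},
    ],
}

# Serialize for transport; an HTTP client would send this body.
payload = json.dumps(appointment, indent=2)
print(payload)
```

The point is less the specific fields than the workflow: once EHRs expose scheduling APIs, a resource like this can be created by any app in the vendor's app store rather than by the EHR alone.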

    3.  Value-based purchasing has generated an interest in customer relationship management - the patient as partner and consumer.  As reimbursement moves from fee for service to quality/outcomes driven risk contracts, incentives are aligned to provide wellness services, “care traffic control”, and loop closure.    EHRs are not optimized for these functions, so third parties are offering cloud-hosted customer relationship management for healthcare.

    4.  Telemedicine and Telehealth continues to grow as efforts to reduce total medical expense move care from downtown academic tertiary referral facilities to lower cost, more convenient alternatives in the home.  Telemedicine means many things and ranges from on demand virtual urgent care visits to store/forward second opinions to expert staff augmentation from a distance.   Products are evolving that enable telemedicine record keeping, billing, and mobile device secure communications.

    5.  Artificial Intelligence/Machine learning is the new “plastics.”   There is no question that AI is at the peak of the hype curve this year.  We need to be measured about our expectations for this technology.   Computers do not “think”; they use pattern matching to focus the attention of humans, separating signal from noise.    There are great use cases for machine learning - automating the sorting of paper medical records for scanning by predicting metadata, scrubbing personal identifiers from unstructured data, and suggesting reasonable ICD10 codes for episodes of care.    It’s not likely that an AI system is going to read the Merck Manual tonight and replace your doctor tomorrow.
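The de-identification use case in item 5 can be illustrated with a toy rule-based scrubber. The patterns below are illustrative assumptions, not a production PHI list; real systems combine such rules with machine-learned named-entity recognition (which would also catch names like "John" below).

```python
import re

# Illustrative patterns only -- production de-identification pairs rules
# like these with trained named-entity recognition models.
PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),          # US Social Security numbers
    (re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"), "[PHONE]"),  # phone numbers
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),  # email addresses
]

def scrub(text: str) -> str:
    """Replace matched identifiers with placeholder tags."""
    for pattern, tag in PATTERNS:
        text = pattern.sub(tag, text)
    return text

note = "Contact John at 617-555-0199 or john.doe@example.org, SSN 123-45-6789."
print(scrub(note))  # the name "John" survives -- hence the need for ML-based NER
```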

    Although my voice is nearly gone, I’m leaving HIMSS with optimism for the industry.   EHR vendors will increasingly share data with each other and with third party developers.  Usability will improve as new applications and analytics reduce clinician burden.   Patients will increasingly be equal members of the care team providing objective and subjective data from devices and mobile apps in their home.  

    As I’ve said before, I believe the next phase of history belongs to the private sector, so for all of the developers, customers, and experts at HIMSS, it’s all up to you.
              Preparing for HIMSS 2017   
    Next week, 50,000 of our closest friends will gather together in Orlando to learn about the latest trends in the healthcare IT industry.

    What can we expect?

    I’ll be giving a few keynote addresses, trying to predict what the Trump administration will bring, identifying those technologies that will move from hype to reality, and highlighting which products are only “compiled” in PowerPoint - a powerful development language that is really easy to modify!

    Here are a few themes

    1.  The Trump administration is likely to reduce regulatory burden but is unlikely to radically change the course of value-based purchasing.    This means that interoperability, analytics, and workflow products that help improve outcomes while reducing costs will still be important.   Fee for service medicine will diminish over time, so focusing on quality healthcare will be more important than increasing the quantity of tests, procedures, and visits.   Novel products and services will be needed since the existing EHR is not designed for optimizing wellness, it’s designed for documenting/billing encounters.

    2.  Precision Medicine that tailors care plans and therapeutics based on the unique characteristics of each individual will continue to be important.    Although there is much discussion of genomic medicine, even simple innovations can make an impact.  For example, my wife needs to take 3.75mg of Methimazole every day but the medication is packaged as a 5mg tablet she needs to cut into quarters.  Why not offer a 3D printer that simply “prints” the tablets you need each day?

    3.  Care Management solutions that treat the patient as customer will continue to be important.   Sharing care plans, monitoring progress on those plans, and engaging patients/families as shared decision makers will require innovation.

    4.  Artificial intelligence/machine learning will be at the peak of the hype curve this year.   IBM Watson will not replace clinicians, but the notion of using software for pattern matching does work well.

    5.  Internet of things, patient generated healthcare data, and telemedicine/telehealth will be increasingly important tools as we strive to reduce total medical expense, address the needs of an aging society, and enable our clinicians to practice at the top of their license.  

    I’ll be running from venue to venue Sunday-Wednesday.   See you there.
              Dissertation Defense: Improved Multi-Task Learning Based on Local Rademacher Complexity Analysis   
    Announcing the Final Examination of Niloofar Yousefi for the degree of Doctor of Philosophy

    When faced with learning a set of inter-related tasks from a limited amount of data, learning each task independently may lead to poor generalization performance. Multi-Task Learning (MTL) exploits the latent relations between tasks and overcomes data scarcity limitations by co-learning all these tasks simultaneously to offer improved performance. Although MTL has been actively investigated by the machine learning community, there are only a few studies examining the theoretical justification of this learning framework. These studies provide learning guarantees in the form of generalization error bounds, which are considered important problems in machine learning and statistical learning theory. This importance is twofold: (1) generalization bounds provide an upper-tail confidence interval for the true risk of a learning algorithm, which cannot be precisely calculated due to its dependency on some unknown distribution P from which the data are drawn; (2) this type of bound can also be employed as a model selection tool, leading to the identification of more accurate learning models.

    The generalization error bounds are typically expressed in terms of the empirical risk of the learning hypothesis along with a complexity measure of that hypothesis. Although different complexity measures can be used in deriving error bounds, Rademacher complexity has received considerable attention in recent years, as these complexity measures can potentially lead to tighter error bounds compared to the ones obtained by other complexity measures. However, one shortcoming of the general notion of Rademacher complexity is that it provides a global complexity estimate of the learning hypothesis space, which does not take into consideration the fact that learning algorithms, by design, pick functions belonging to a more favorable subset of this space, and they therefore yield better performing models than the worst case. To overcome the limitation of global Rademacher complexity, a more efficient notion of Rademacher complexity, the so-called local Rademacher complexity, has been considered, which leads to sharper learning bounds, and as such, compared to its global counterpart, guarantees a faster rate of convergence. Also, considering the fact that local bounds are expected to be tighter than the global ones, they can motivate better (more accurate) model selection algorithms.

    While the previous MTL studies provide generalization bounds based on some other complexity measures, in this dissertation, we derive generalization error bounds for some popular kernel-based MTL hypothesis spaces based on the Local Rademacher Complexity (LRC) of those hypotheses. We show that these local bounds have a faster convergence rate compared to the previous Global Rademacher Complexity (GRC)-based bounds. We then use our LRC-based MTL bounds to design a new kernel-based MTL model which benefits from strong learning guarantees. An optimization algorithm will be proposed to solve our new MTL problem. Finally, we run simulations on experimental data that compare our MTL model to some classical Multi-Task Multiple Kernel Learning (MT-MKL) models designed based on the GRCs. Since the local Rademacher complexities are expected to be tighter than the global ones, our new model is also expected to show better performance compared to the GRC-based models.
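In generic form (standard statistical-learning notation, not the dissertation's own statement), the two bound types being compared can be contrasted as:

```latex
% Generic shape of the two bound types (standard notation; not the
% dissertation's own statement). With probability at least 1 - \delta,
% for every hypothesis f in the class \mathcal{F}:
R(f) \;\le\; \widehat{R}_n(f) \;+\; 2\,\mathfrak{R}_n(\mathcal{F})
     \;+\; \sqrt{\frac{\log(1/\delta)}{2n}}
% A local analysis replaces the global term \mathfrak{R}_n(\mathcal{F})
% with the complexity of a variance-constrained subclass,
\mathfrak{R}_n\bigl(\{\, f \in \mathcal{F} : \operatorname{Var}(f) \le r \,\}\bigr)
% whose fixed point r^* yields bounds decaying as fast as O(1/n)
% rather than O(1/\sqrt{n}).
```

This is why local bounds both converge faster and support sharper model selection, as the abstract argues.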

    Committee in Charge: Mansooreh Mollaghasemi (Chair), Michael Georgiopoulos, Luis Rabelo, Qipeng Phil Zheng, Georgios Anagnostopoulos, Petros Xanthopoulos

              Dissertation Defense: Data Representation in Machine Learning Methods with its Application to Compilation Optimization and Epitope Prediction   
    Announcing the Final Examination of Yevgeniy Sher for the degree of Doctor of Philosophy

    In this dissertation we explore the application of machine learning algorithms to compilation phase order optimization, and epitope prediction. The common thread running through these two disparate domains is the type of data being dealt with. In both problem domains we are dealing with discrete/categorical data, with its representation playing a significant role in the performance of classification algorithms.

    We first present a neuroevolutionary approach which orders optimization phases to generate compiled programs with performance superior to those compiled using LLVM's -O3 optimization level. Performance improvements calculated as the speed of the compiled program's execution ranged from 27% improvement for the ccbench program, to 40.8% for bzip2.

    This dissertation then explores the problem domain of epitope prediction. This problem domain deals with text data, where protein sequences are presented as sequences of amino acids. The DRREP system is presented, which demonstrates how an ensemble of extreme learning machines can be used with string kernels to produce state-of-the-art epitope prediction results. DRREP was tested on the SARS subsequence; the HIV, Pellequer, and AntiJen datasets; and the standard SEQ194 test dataset. AUC improvements achieved over the state of the art ranged from 3% to 8%.
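The string-kernel idea behind systems like DRREP can be sketched with a k-mer spectrum kernel, one simple member of the string-kernel family (the dissertation does not specify this exact kernel; it is an illustrative assumption): two sequences are compared through the k-mers they share.

```python
from collections import Counter

def spectrum_kernel(x: str, y: str, k: int = 3) -> int:
    """k-mer spectrum kernel: inner product of k-mer count vectors.

    A simple string kernel of the kind used to compare protein sequences;
    illustrative only -- DRREP's exact kernel may differ.
    """
    cx = Counter(x[i:i + k] for i in range(len(x) - k + 1))
    cy = Counter(y[i:i + k] for i in range(len(y) - k + 1))
    # Sum counts over k-mers present in both sequences.
    return sum(cx[m] * cy[m] for m in cx if m in cy)

# Toy amino-acid sequences: the kernel value grows with shared subsequences.
print(spectrum_kernel("ACDEFG", "CDEFGH", k=3))  # → 3 (shares CDE, DEF, EFG)
```

Such a kernel can be supplied to any kernelized learner (SVMs, or the extreme learning machines used here) as a precomputed Gram matrix.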

    We then present the SEEP epitope classifier, an SVM ensemble-based classifier which uses the conjoint triad feature representation and produces state-of-the-art classification results. SEEP leverages the domain-specific, knowledge-based protein sequence encoding developed within the protein-protein interaction research domain. Using an ensemble of SVMs and a sliding-window-based pre- and post-processing pipeline, SEEP achieves an AUC of 91.2 on the standard SEQ194 test dataset, a 24% improvement over the state of the art.

    Finally, this dissertation concludes by formulating a new approach for distributed representation of 3D biological data through the process of embedding. Analogously to word embedding, we develop a system that uses atomic and residue coordinates to generate distributed representations of residues. Preliminary results are presented where the Residue Surface Vectors, distributed representations of residues, are used to predict conformational epitopes and protein-protein interactions with promising proficiency. The generation of such 3D BioVectors, and the proposed methodology, opens the door to substantial future improvements and new application domains.

    Committee in Charge: Shaojie Zhang (Chair), Damian Dechev (Co-Chair), Gary Leavens, Avelino Gonzalez, Degui Zhi

              Gemalto applies biometrics and machine learning to counter online banking fraud   
    Gemalto (Euronext NL0000400653 GTO), the world leader in digital security, is launching the Gemalto...
              Biometric identity platform AimBrain raises £4m   
    AimBrain, a London-based startup using machine learning to help financial services firms tap into bi...
               Reconstructing muscle activation during normal walking: a comparison of symbolic and connectionist machine learning techniques    
    Heller, Ben W. and Veltink, Peter H. and Rijkhoff, Nico J.M. and Rutten, Wim L.C. and Andrews, Brian J. (1993) Reconstructing muscle activation during normal walking: a comparison of symbolic and connectionist machine learning techniques. Biological Cybernetics, 69 (4). pp. 327-335. ISSN 0340-1200
              McAfee Stinger 12.1.0.2416   



    McAfee Stinger is a portable removal tool utility used to detect and remove specific viruses. It's not a substitute for full anti-virus protection, but a specialized tool to assist administrators and users when dealing with an infected system.

    McAfee Stinger includes a real-time behavior detection technology (called "Real Protect") that monitors suspicious activity on an endpoint. It leverages machine learning and automated behavior-based classification in the cloud to detect zero-day malware in real time.

    McAfee Stinger utilizes next-generation scan engine technology, including process scanning, digitally signed .DAT files and scan performance optimizations. It detects and removes threats identified under the “List Viruses” icon in the application.
              The Future Of The Web Is Audible   
    Like it or not, the web has mostly been designed for those who can see it. The very nature of HTML and CSS is focused on how a web page looks, mostly disregarding our other senses. With the increasing popularity of wearable technology combined with advancements in machine learning, a [...]
              Suggested Sharing and Shared Libraries are rolling out in Google Photos today   

    Google announced a few cool things were coming to Google Photos back at I/O, but there was no date for the rollout. Apparently today is the day. Both Suggested Sharing and Shared Libraries are rolling out to all devices, employing Google's machine learning muscle to make it easier to share photos with friends and family.

    Suggested Sharing will be available in the new Sharing tab, which tracks all your photo sharing activity.


    Suggested Sharing and Shared Libraries are rolling out in Google Photos today was written by the awesome team at Android Police.


              Multiple regions deep within the brain collaborate in empathetic and moral decision-making   
    It's a classic conundrum: while rushing to get to an important meeting or appointment on time, you spot a stranger in distress. How do you decide whether to stop and help, or continue on your way? A new study by neuroscientists at Duke and Stanford University sheds light on how the brain coordinates these complex decisions involving altruism and empathy. The answer lies in the way multiple areas of the brain collaborate to produce the decision, rather than just one area or another making the call. "The brain is more than just the sum of its individual parts," said Jana Schaich Borg, assistant research professor in the Social Science Research Institute and the Center for Cognitive Neuroscience at Duke. Using a technique that combines electrical monitoring of brain activity with machine learning, the team was able to tune into the brain chatter of rats engaged in helping other rats.
              Comment l’intelligence artificielle transforme nos métiers ?   

    L’intelligence artificielle s’illustre aujourd’hui au quotidien et se déploie rapidement dans la plupart des secteurs d’activités, bousculant ainsi les expertises humaines en entreprise. Si un emploi sur deux devrait s’en trouver transformé (Etude du Conseil d’orientation pour l’emploi (COE), janvier 2017), l’intelligence artificielle ne représente pour autant pas une menace pour ces métiers, qui devraient être redirigés vers des tâches moins répétitives et à plus forte valeur ajoutée. Selon une étude de PwC conduite en mars 2017, 70 % des métiers de l’énergie et 65 % des métiers de la consommation pourraient être automatisés via l’intelligence artificielle.

    L’arrivée de cette nouvelle technologie implique un changement dans la chaîne de valeur et, si elle ouvre la voie à de nouvelles compétences – comme par exemple la cybersécurité – elle représente aussi un défi majeur en termes d’adaptation des compétences et une véritable opportunité pour l’évolution des métiers. Ce défi, il revient à nous, dirigeants et managers, de le relever pour accompagner nos équipes dans cette profonde mutation : vaincre les peurs, accueillir l’innovation, transformer les postes de travail, former les équipes.

    Une opportunité plus qu’une menace

    L’adoption de robots et d’intelligence artificielle pourrait booster la productivité de 30 % dans les entreprises.

    Un argument de poids dans l’adoption de ces technologies, qui représentent un bouleversement majeur pour l’emploi dans l’adaptation des compétences mais également dans l’évolution des tâches quotidiennes de chaque métier. Une étape qui nécessite une importante phase de conduite du changement au sein des entreprises, pour accompagner les métiers concernés et aborder cette transition avec sérénité.

    En effet, la machine permet dès aujourd’hui à l’homme de prendre de nouvelles responsabilités, en travaillant par exemple à l’identification des tâches candidates à l’automatisation. Certaines tâches chronophages et abrutissantes peuvent ainsi d’ores et déjà être éliminées : les échanges avec le client les plus simples à traiter, comme les demandes de duplicata de facture dans le cas d’un litige, sont identifiées par la machine, qui répond automatiquement et en temps réel. L’entreprise gagne en vélocité – la machine étant bien plus réactive que l’homme – ainsi qu’en satisfaction client : le taux d’exactitude de la réponse augmente, la relation client s’en trouve donc améliorée.

    L’enjeu : faciliter la transition

    Aujourd’hui, l’intelligence artificielle en est à sa première phase de développement : l’intelligence dite « assistée ». Elle permet d’automatiser des tâches répétitives et, bien qu’elle ne révolutionne pas encore la nature des tâches en elle-même, s’enrichit et apprend via des algorithmes de Machine Learning. Demain, l’intelligence « augmentée » permettra de faire évoluer les tâches et d’échanger avec la machine directement, pour finalement parvenir à une intelligence « autonome », où les machines apprendront de façon continue pour automatiser la prise de décisions (comme par exemple dans le cas des véhicules autonomes ou des investissements intelligents).

    Employees 2.0, who already work with artificial intelligence every day, have a golden opportunity before them: to learn to master this technology from its first steps and to follow its evolution phase by phase. That opportunity will become a real competitive advantage once AI has become commonplace in companies and organizations start looking for people able to work hand in hand with these new robotic colleagues.

    For the company, too, the challenge is significant: what tasks should be assigned to employees now freed from the most tedious daily work? Toward which higher-value tasks should they be directed? How should the customer relationship be rethought, rebalancing the share of automated exchanges against the share left to humans, who can now focus on more complex issues and accounts? How should performance be evaluated in this new context? These are strategic questions that will need answers in the coming years, or even months.

    The whole challenge is to ease this transition, for example by increasing flexibility, training efforts, and the social protection system.

    Robots, guarantors of employment?

    Helping take notes, writing emails by dictation, suggesting contacts, scheduling meetings, placing phone calls, prioritizing tasks, managing social networks... all time-consuming daily tasks that artificial intelligence will take over tomorrow.

    "The ultimate goal is to have more time for yourself and your family, more interactions with people, and time to engage in social activities," says Catherine Simon, president and founder of Innorobo, the European summit entirely dedicated to robotics technologies.

    Yet, according to Ifop, 65% of French people say they are worried by the growing autonomy of machines, notably by its impact on the job market. It is a fear "that returns with every crisis of capitalism," according to economic historian François Jarrige. Robotization is ultimately the continuation of the mechanization and automation process begun with the industrial revolution, one that could in time greatly reduce the arduousness of work while preserving its competitiveness.

    A report by the Boston Consulting Group (BCG) shows that unemployment rates in the most robotized countries (Germany, South Korea, Brazil, the USA) are also among the lowest or best controlled; Germany, for instance, achieved a 4% decrease between 2013 and 2014.

    Robotics would therefore have no direct negative impact on employment and could even revive struggling sectors: robotics investments by the German and Japanese automotive industries, for example, helped them hold their positions in the car market, and with them the jobs tied to that industry. According to a study by Metra Martech relayed by the IFR, the 1 million industrial robots currently in service are already directly responsible for the creation of 3 million jobs. The growth of robotics over the next 5 years should create another 1 million skilled jobs worldwide.

    The customer relationship reinvented

    Just as robotics revolutionized industrial processes and digital technology revolutionized the BtoC customer relationship at the start of the 2010s, it is now up to BtoB companies to carry out their own revolution. Beyond individual job roles, the whole relationship with customers must evolve, and the company must drive this reinvented relationship through its employees. Artificial intelligence now allows companies to take this turn and rethink a customer relationship that is more immediate, more direct, and more firmly centered on business issues.

    Valérie Burel
    VP of Customer Performance at Sidetrade
    Artificial intelligence, robots

              For or against artificial intelligence?   

    Artificial intelligence is the technology of the moment. How did AI become the new big trend among technology vendors, particularly in security? Should we fear AI, or rather look forward to new technologies built on machines endowed with a form of intelligence and self-learning capabilities? The trend decoded in 7 questions every company should be asking today.

    Why is the security sector so interested in artificial intelligence right now?

    Two more or less concurrent factors have contributed to the increased interest in artificial intelligence (AI) in the security field.

    First, Big Data technology has become mainstream and accessible to a broad audience. Large-scale computing is no longer the preserve of the big technology players and research institutes. The increase in computing power, driven in particular by inexpensive cloud solutions and easy-to-use tools, has allowed a much wider range of users to apply sophisticated machine learning and artificial intelligence algorithms to their problems.

    In parallel, companies and security vendors have realized how hard it is to fight cybercriminals, who constantly find new ways to infiltrate corporate networks without being spotted. For IT teams, updating predefined rules and writing new ones is an extremely costly and unsustainable answer to targeted threats. According to a recent Ponemon Institute study, the human costs of deploying and regularly maintaining a SIEM average $1.78 million per year per company. Hence the frustration of IT teams, who favor solutions that require as little customization and tuning as possible and come with a self-learning capability.

    What are the main advantages of artificial intelligence technologies?

    AI offers two main advantages.

    First, most artificial intelligence and machine learning solutions are self-adapting and require little customization or maintenance. They analyze how things behave within a given environment and adapt to the situation. They also significantly reduce maintenance costs.

    Second, they can detect problems and attacks they were not explicitly programmed to identify, what we call "unknown" threats. Security professionals can thus hope to stay one step ahead of attackers in the cat-and-mouse game that is security.

    What are the main concerns about adopting AI?

    These algorithms make more nuanced decisions than the rules we are all used to. The question is no longer whether something is allowed or not, nor whether an action is malicious or harmless. We are entering a world of "probabilities" and "thresholds".
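    The shift from rules to probabilities and thresholds can be sketched in a few lines. Everything below is a hypothetical illustration (the event fields, weights, and threshold are invented, not any vendor's scoring model):

```python
# Hypothetical illustration: a rule gives a binary answer; a model
# returns a probability-like score, and a tunable threshold turns
# that score into an action.

def rule_based(event):
    # Classic rule: yes/no, no nuance.
    return "deny" if event["failed_logins"] > 3 else "allow"

def model_score(event):
    # Stand-in for a trained model: combines weak signals into a
    # score between 0 and 1.
    score = 0.0
    score += 0.3 if event["failed_logins"] > 3 else 0.0
    score += 0.4 if event["new_country"] else 0.0
    score += 0.2 if event["off_hours"] else 0.0
    return min(score, 1.0)

def decide(event, threshold=0.5):
    # The operator now tunes a threshold instead of editing rules.
    return "investigate" if model_score(event) >= threshold else "allow"

event = {"failed_logins": 1, "new_country": True, "off_hours": True}
print(rule_based(event))   # "allow": the rule sees nothing wrong
print(decide(event))       # "investigate": the combined score crosses 0.5
```

    The point of the sketch is the last two lines: no single signal trips a rule, but the accumulated probability does, and moving the threshold trades false positives against missed attacks.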

    Moreover, there is very often a clear gap between how an algorithm works and our ability to understand how it reached a given conclusion. To achieve the best results, an algorithm follows a process that in many cases cannot be fully explained or grasped. When its decision has serious consequences, such as canceling a transaction, suspending an account, or launching a costly investigation, it is very frustrating not to be able to understand, quickly and completely, the reasons behind that choice.

    Another problem is harder to grasp but just as real: AI has no conscience or ethics. It merely learns and reproduces the way humans make decisions, or optimizes parameters, to reach an optimum that does not always match what we are really looking for. Applied naively, algorithms can amplify our biases and build systems that discriminate against certain people, or make decisions a human would judge ethically unacceptable. The emergence of driverless cars has sparked a lively debate on this point, but the same issues arise in other fields, including cybersecurity.

    Is the AI trend here to stay in the cybersecurity world?

    Artificial intelligence is already the big trend and the most fashionable technology, and perhaps also the most overhyped. One thing is certain, though: everyone is talking about it and many are experimenting with it. As we make progress in using AI, the industry will stop treating it as a cure-all or a catchy marketing device and will eventually find the right scope of application for these algorithms.

    We will continue to need traditional structures and controls, just as we need both locked doors and police forces for our physical security, but we will probably be able to lighten those controls by relying more on advanced analytics.

    Is AI the best way to fight the growing risk of insider threats?

    AI is certainly a weapon that will occupy a very important place in the defensive arsenal. With insider threats, the greatest difficulty comes from the fact that malicious insiders carry out their misdeeds using the privileges granted to them as a normal part of their jobs. Limiting access, generating detailed audit logs, and stepping up monitoring will certainly reduce the risks, but there will always be employees who need access to sensitive data and who, being human, may act maliciously or be blackmailed. AI, and behavioral analytics in particular, can be used to recognize changes in work habits and alert security teams in real time.

    How do we manage the synergy between AI and the human side of operations?

    The goal is not to replace human beings but to let them devote their resources to activities that really matter. Computers can process huge amounts of data quickly, and that is what they should be used for. Humans, for their part, understand each other, perceive intentions, and communicate with one another. The best artificial intelligence tools relieve us of tedious menial tasks and help us solve bigger problems. Of course, we must keep in mind that these are means, not ends: we must set goals and choose the tools best suited to reaching them.

    Without AI, is cybersecurity doomed to fail?

    That statement from the Director of NASA is quite apt, and several arguments can be made to support the view. One thing is certain, however: there will be no going back. Security remains a kind of arms race, and attackers will keep developing ever more sophisticated and stealthy programs, along with other hacking tools that let them infiltrate networks undetected. Security teams will have to keep up their efforts if they do not want to be defeated.

    Péter Gyöngyösi
    Product Manager at contextual security vendor Balabit
    Artificial intelligence

              In 2017, artificial intelligence at the heart of the battle of the bots   

    2016 definitively marked the advent of bots, those little software robots able to simulate a conversation with the user. Far from fading in 2017, the trend will even bring the "battle of the bots," in which we will start separating the "good" bots from the "bad" ones, some brands will succeed while others fail, and best practices will begin to be cataloged.

    Companies, vendors, analysts, and experts all agree that bots are here to stay. Gartner even predicts that by 2019, 20% of brands will have abandoned their mobile apps, and that by 2020 the average person will have more conversations with bots than with their own spouse. But not all bots are built on the same model, and they are not all destined for the same fate.

    For Nuance researchers, what makes a "good" bot successful is the customer experience it can deliver. Forrester Research shares this conviction: "Customers now reward or punish brands on the basis of a single experience that shapes their impression at a given moment. This behavior, at first characteristic of the Millennial generation, is now spreading to earlier generations. It has become standard practice."

    So, in a future where bots multiply and gradually settle into people's daily lives across all generations, a bot's chances of success will be determined by the following characteristics:

    1.   Conversational artificial intelligence: to be effective, a bot must be able to converse intelligently with a consumer in a two-way dialogue. Like a human, it must be able to interpret context, for example when the consumer quickly changes the subject or uses colloquial words and expressions. Most bots are not yet sophisticated enough for this. Some can answer a basic query such as "What is the temperature in Miami?" (answer: "It is 30 degrees in Miami"), but if the consumer follows up with "And in Beijing?", most bots will not make the contextual link and will not understand that the question is about the weather.

    2.    Cognitive artificial intelligence: this refers to a bot's reasoning faculties, which help it make decisions and anticipate the consumer's needs. While such skills are inherently human, sufficiently precise technology can approach the effect of "human reasoning." For example, traditional speech recognition systems understand what people say, while today's more sophisticated natural language understanding systems understand what people mean and what they want to do. Both speech recognition and natural language interpretation rely on Big Data and extensive knowledge of customer intent.

    3.    Human-assisted artificial intelligence: this is what professionals call supervised AI. Working day to day alongside professional human agents, bots develop their skills through accelerated machine learning and learn what they need from the humans beside them. This supervision keeps bots from being set directly and solely "in front of humans" to learn to react on their own, a practice that has led to the kind of dramatic failures seen in some headlines this year.

    4.    Omnichannel integration: effective bots are not standalone applications but global tools that act as a central cortex and can be deployed across the many channels consumers use: messaging apps, mobile apps, telephony systems, the Web, chat applications, and social media. Within an integrated omnichannel strategy, customers get a consistent experience whatever channel they use. For companies, it also means the end of siloed technologies.

    5.    Intelligent authentication and security: voice biometrics lets consumers authenticate easily and naturally without typing a password or PIN. Authentication simply requires speaking a short passphrase, such as "My voice is my password." That means the end of hard-to-remember PINs, and even of security questions like "What is the first name of your childhood best friend?". Voice biometrics also markedly strengthens security compared with traditional authentication methods and helps fight fraud. Implementing this authentication method will likewise be a success factor for bots from 2017 onward.
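    The context carryover described in point 1 can be sketched as a tiny dialogue state: the bot remembers the last intent, so an elliptical follow-up like "And in Beijing?" inherits the "weather" intent. Everything here is a hypothetical toy (the intent detection and the temperature table are invented), not any vendor's dialogue engine:

```python
# Minimal sketch of conversational context carryover: the bot keeps
# the last detected intent so a follow-up that only names a city
# still resolves as a weather question.

TEMPS = {"Miami": 30, "Beijing": 35}  # toy knowledge base, degrees

class Bot:
    def __init__(self):
        self.last_intent = None  # dialogue state carried between turns

    def reply(self, utterance):
        words = utterance.rstrip("?").split()
        city = next((w for w in words if w in TEMPS), None)
        if "temperature" in utterance or "weather" in utterance:
            self.last_intent = "weather"
        if self.last_intent == "weather" and city:
            return f"It is {TEMPS[city]} degrees in {city}"
        return "Sorry, I did not understand"

bot = Bot()
print(bot.reply("What is the temperature in Miami?"))  # It is 30 degrees in Miami
print(bot.reply("And in Beijing?"))                    # It is 35 degrees in Beijing
```

    A context-free bot would fail on the second turn; keeping even one slot of state is what turns a lookup service into a conversation.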

    The difference between a "good" and a "bad" bot will come down to the experience it can deliver to the user. A good bot will mean a fluid, intuitive dialogue, almost giving the user the impression of speaking with an intelligent person. Originally, the goal of a bot was to simplify the human/technology relationship and improve the experience of dealing with machines. As bots become mainstream, the aim will be to bring them ever closer to the human experience. The difference between simple speech recognition techniques and language understanding technologies in the service of artificial intelligence will then be felt more than ever.

    Scott Wickware
    Senior Executive and Board Member at Nuance Communications
    Artificial intelligence, bots

              Artificial Intelligence, or the fulfillment of utopias   

    "Artificial Intelligence (AI) is the field of computer science that studies how to make computers perform tasks at which humans are, for now, still the best" (1).

    After the almost utopian euphoria of the 1960s and 70s and the dashed hopes of the 1980s, which saw the symbolic approach recede, AI rose from its ashes in that same period through a connectionist approach that brought multi-agent systems, auto-associative memories, and high-performing artificial neural networks (ANNs).

    Some sixty years of research and major advances have thus made AI a powerful vector of transformation, one that is now reshaping all human activities, the enterprise, and business models. Oscar Wilde considered progress to be the fulfillment of utopias. In the end, that view best fits the current evolution of artificial intelligence, which heralds the greatest innovations.

    Machine learning

    First appearing in the early 1950s, neural networks are the founding elements of machine learning. Thanks to them, a program is now able to "learn" and improve its answers through experience. It is this capacity for learning (supervised or unsupervised), transferred to a machine, that is revolutionizing digital practice and driving AI's success. Advances in AI affect all human activities, from industry to services, healthcare to education, agriculture to transport, security to defense.
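    The idea of a program improving its answers through experience fits in a dozen lines. The sketch below is a classic single-neuron perceptron learning the logical AND function from labeled examples; the learning rate and epoch count are illustrative choices, not prescriptions:

```python
# Minimal sketch of "learning from experience": a single artificial
# neuron adjusts its weights from labeled examples (here, logical AND)
# instead of being explicitly programmed with the rule.

def train_perceptron(samples, epochs=10, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            # Error-driven update: this is the "experience" at work.
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
predict = lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
print([predict(x1, x2) for (x1, x2), _ in AND])  # [0, 0, 0, 1]
```

    Nothing in the code states the AND rule; the weights converge to it from the examples alone, which is the whole connectionist bet scaled down to one neuron.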

    No field of expertise today can claim specific features that would make it incompatible with AI's functional capabilities. Riding the growth in computing power (Moore's law), AI is the main engine of the digital revolution, the foremost challenge facing enterprises.

    It is the subject of an innovation race among the domain's major players. Whether private or state actors, they have fully grasped the "strategic" nature of its development and are trying to impose their standards by releasing platforms of open-source algorithmic building blocks. More generally, AI makes it possible to meaningfully exploit the Big Data produced by sensors, connected objects, and everything generated on the internet and social networks.

    Machine learning is thus proving highly effective at many tasks: signal processing, process control, robotics, classification, data pre-processing, pattern recognition, image analysis and speech synthesis, cybersecurity, medical diagnosis and monitoring, stock-market forecasting, credit and mortgage applications, recruitment and automatic CV screening...

    European know-how and a hyperactive French Tech

    In AI, the American GAFAM giants (Google, Apple, Facebook, Amazon, Microsoft) hold a dominant position. That leadership should not, however, obscure Europe's strong potential and French excellence, both regularly recognized internationally.

    By locating its research group dedicated to machine learning (GRE) in Zurich and entrusting its leadership to Frenchman Emmanuel Mogenet, Google is betting squarely on European excellence. Its London subsidiary Google DeepMind, a world flagship of AI, has racked up innovation successes, notably AlphaGo's victories over world champion Lee Sedol.

    Facebook has located one of its Facebook Artificial Intelligence Research (FAIR) laboratories in Paris, under the direction of Frenchman Yann LeCun, considered one of the world's best deep learning specialists. These "strategic" locations sketch out a European axis of AI and testify to the GAFAM companies' interest in European know-how.

    Europe, and France in particular, shows real dynamism in creating startups centered on artificial intelligence. Many engineering students and doctoral candidates work during their studies on a project embedding AI, then turn that project into a startup supported by their engineering school's incubator. This proven approach effectively supports the young company and stabilizes it through its first months.

    Among the French startups that have bet on artificial intelligence are Alkemics, for intelligently connecting brands and retailers to better serve the omnichannel consumer experience; Blue Frog Robotics, for companion robots (Buddy); Cardiologs Technologies, for managing cardiac conditions; Elum Energy, for intelligent management of photovoltaic energy; Scortex and Craft.ai, for applying AI to connected objects; Julie Desk, for the personal assistant; and Smart Me Up, for real-time facial recognition.

    Several startups on this list won innovation awards in 2015 and 2016. Backed by academic incubators (ParisTech Entrepreneurs, X-UP, the École polytechnique accelerator, and others), they display a dynamism that should inspire the digital economy's various players and political decision-makers. France's large industrial groups must also play their part by accepting risk and acquiring these startups when they come up for sale, so that such concentrations of technological excellence no longer slip away.

    France will not miss the artificial intelligence train. It has no choice but to support this ecosystem by creating an environment favorable to digital innovation. It has for this purpose a universally recognized pool of skills and expertise that should drive a successful transformation of its companies and, with it, of the French economy.

    (1) Elaine Rich and Kevin Knight, Artificial Intelligence, McGraw-Hill.

    Eric Cohen
    Founder & CEO of Keyrus
    Artificial intelligence, machine learning

              Machine learning, the key to artificial intelligence?   

    Scientists began working on artificial intelligence in the 1950s. Research is probably advancing more slowly than in science-fiction films, but if there is one area where progress is real, it is machine learning.

    Machine learning consists of giving computing tools such as PCs the ability to learn without being explicitly programmed to do so. Machine learning will, for example, help smartphones understand the human voice, make driverless cars possible, and provide faster and more relevant answers to search-engine queries.

    Enabling machines to learn on their own is complex. Machines can execute programmed tasks very quickly and very precisely, but with no faculty of reasoning whatsoever. That is why machines are the best tools there are for high-performance computing tasks, for example.

    But machine performance stops where a problem cannot be translated into simple logical rules, and where programmers do not know what commands to give the machine.

    Les solutions d’apprentissage automatique permettent de révéler des tendances et des modèles sur la base de données et cela de manière très précise. Couplés à des solutions de récolte et de recoupement de données, les algorithmes sont aussi capables de créer des prévisions dans le futur.

    Two main techniques exist:

    ·        Supervised machine learning: used when events must be sorted into known categories, based on examples of real events. The product recommendation systems of e-commerce sites such as Amazon are an excellent example: the system recommends books, CDs, or other products (science-fiction novels, jazz CDs, and so on) to users based on their own habits or those of users with similar profiles.  
    ·        Unsupervised machine learning: used when the machine has no examples and the categories are therefore unknown. Example: automatically grouping data based on similarities or dissimilarities.  
     
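    The two techniques can be contrasted in a few lines. The sketch below is a hypothetical toy (one-dimensional data, invented ratings): the supervised side predicts a known label from labeled examples, while the unsupervised side splits the same unlabeled numbers into two groups on its own:

```python
# Toy contrast between supervised and unsupervised learning.

def supervised_classify(labeled, x):
    # Supervised: categories are known; predict the label of the
    # closest labeled example (nearest-neighbor).
    return min(labeled, key=lambda pair: abs(pair[0] - x))[1]

def unsupervised_cluster(points, passes=5):
    # Unsupervised: no labels; split the data into two groups by
    # iteratively refining two centers (one-dimensional two-means).
    lo, hi = min(points), max(points)
    for _ in range(passes):
        a = [p for p in points if abs(p - lo) <= abs(p - hi)]
        b = [p for p in points if abs(p - lo) > abs(p - hi)]
        lo, hi = sum(a) / len(a), sum(b) / len(b)
    return sorted(a), sorted(b)

# Supervised: purchase scores already labeled by genre preference.
labeled = [(1.0, "jazz"), (1.2, "jazz"), (8.9, "sci-fi"), (9.4, "sci-fi")]
print(supervised_classify(labeled, 9.0))   # sci-fi

# Unsupervised: the same kind of numbers, no labels, still split in two.
print(unsupervised_cluster([1.0, 1.2, 8.9, 9.4, 9.0]))
```

    The supervised function can name its answer ("sci-fi") because the categories were given; the unsupervised one can only report that two groups exist, which is exactly the distinction in the list above.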

    Machine learning in IT security

    Beyond the uses mentioned above, machine learning is gradually becoming a tool in the IT security world. The reason is a new trend that makes the monitoring of users, rather than controls or monitored endpoints, the focal point of security. Control-focused applications can be very effective against known viruses and malware, but much less so against advanced persistent threats (APTs). A typical APT attack involves an attacker exploiting a zero-day vulnerability and installing a keylogger on the user's computer. Because SIEM (Security Information and Event Management) solutions cannot protect against zero-day vulnerabilities, such an attack is nearly impossible to detect or prevent. That is why the most agile security companies have begun developing their own User Behavior Analytics (UBA) solutions.

    Behavioral analytics in practice

    The core concept of behavioral analytics (UBA) solutions is very simple. Just as parents can recognize their children and tell them apart from others by simple behavioral details such as their gait, behavioral analytics software can recognize users by characteristics unique to them and detect when they do something strange, even if the person behind the user account is an external attacker who has stolen, and is using, the user's valid credentials.

    Behavioral analytics solutions have plenty of data with which to detect unusual activity: the time and place of login, a device's screen resolution and operating system, the list of applications and protocols regularly used, keyboard typing speed.

    While traditional security tools do not usually use this data, behavioral analytics solutions backed by machine learning can turn this mass of data into actionable intelligence.

    In practice, analyzing user behavior makes it possible to counter attacks that were previously hard to detect. An employee who resigns may be tempted to collect large volumes of confidential corporate data, saving it to a USB stick to carry out of the building. Because that behavior is classified as unusual against the user's profile, a behavioral analytics solution can alert the security team and store the details of the event, providing legal evidence of the malicious behavior.
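    The USB-export scenario can be sketched as a per-user statistical baseline. Everything below is a hypothetical illustration (the feature name, the session data, and the z-score limit are invented, not BalaBit's product logic):

```python
# Sketch of the UBA idea: build a per-user baseline from past
# sessions, then flag sessions that deviate strongly from it.
from statistics import mean, stdev

def baseline(sessions, feature):
    values = [s[feature] for s in sessions]
    return mean(values), stdev(values)

def is_unusual(history, session, feature, z_limit=3.0):
    # Flag a session whose feature value lies more than z_limit
    # standard deviations from the user's own average.
    mu, sigma = baseline(history, feature)
    if sigma == 0:
        return session[feature] != mu
    return abs(session[feature] - mu) / sigma > z_limit

# Past sessions: megabytes copied to removable media per day.
history = [{"mb_to_usb": v} for v in (0, 2, 1, 0, 3, 1, 2, 0)]
today = {"mb_to_usb": 5000}   # bulk export before resigning

if is_unusual(history, today, "mb_to_usb"):
    print("ALERT: unusual data export, notifying security team")
```

    The key property is that the threshold is relative to each user's own habits: 5000 MB is an anomaly for this profile, while for a video editor it might be routine.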

    Daniel Bago
    Blindspotter Marketing Manager at BalaBit IT Security
    Machine learning, artificial intelligence

              Nutanix and Google Cloud team up to simplify hybrid cloud   

    Today, we’re announcing a strategic partnership with Nutanix to help remove friction from hybrid cloud deployments for enterprises. We often hear from our customers that they’re looking for solutions to deploy workloads on premises and in the public cloud.

    Benefits of a hybrid cloud approach include the ability to run applications and services, connected or disconnected, across clouds. Many customers adopt hybrid cloud strategies so that their developer teams can release software quickly and target the best cloud environment for each application. However, applications that span both infrastructures introduce challenges, such as migrating workloads that need portability (dev-test, for example) and managing across different virtualization and infrastructure environments.

    Instead of taking a single approach to these challenges, we prefer to collaborate with partners and meet customers where they are. We're working with Nutanix on several initiatives, including:

    • Easing hybrid operations by automating provisioning and lifecycle management of applications across Nutanix and Google Cloud Platform (GCP) using the Nutanix Calm solution. This provides a single control plane to enable workload management across a hybrid cloud environment.

    • Bringing Nutanix Xi Cloud Services to GCP. This new hybrid cloud offering will let enterprise customers leverage services such as Disaster Recovery to effortlessly extend their on-premise datacenter environments into the cloud.

    • Enabling Nutanix Enterprise Cloud OS support for hybrid Kubernetes environments running Google Container Engine in the cloud and a Kubernetes cluster on Nutanix on-premises. Through this, customers will be able to deploy portable application blueprints that target both an on-premises Nutanix footprint as well as GCP.

    In addition, we’re also collaborating on IoT edge computing use-cases. For example, customers training TensorFlow machine learning models in the cloud can run them on the edge on Nutanix and analyze the processed data on GCP.

    We’re excited about this partnership as it addresses some of the key challenges faced by enterprises running hybrid clouds. Both Google and Nutanix are looking forward to making our products work together and to the experience we'll deliver together for our customers.


              Software Development Manager - AFT Entropy Management Tech - AMZN CAN Fulfillment Svcs, Inc - Toronto, ON   
    We operate at a nexus of machine learning, computer vision, robotics, and healthy measure of hard-earned expertise in operations to build automated, algorithmic...
    From Amazon.com - Tue, 27 Jun 2017 14:12:51 GMT - View all Toronto, ON jobs
              Healthcare IT startups to watch: Running list of big news   
    Slideshow Image: 
    http://www.healthcareitnews.com/sites/default/files/Startups%20to%20watch_0.jpg
    Slideshow Title: 
    Healthcare IT startups to watch in 2016: Running list of big news
    Slideshow Description: 

    From virtual care platforms to precision medicine, data analytics to interoperability, the healthcare IT landscape is constantly changing thanks to new approaches driven by entrepreneurs making waves in the sector.


    Slideshow Image: 
    http://www.healthcareitnews.com/sites/default/files/Mark%20Nathan%2C%20Zipari%20CEO.jpg
    Slideshow Description: 

    Mark Nathan, co-founder and CEO of Zipari

    Health insurance tech startup Zipari nabbed $7 million in its first round of funding, led by Vertical Venture Partners. The company will use the cash to meet the expanding demand for its suite of customer relationship management-centered software as a service for the health insurance industry.

    Read the article.

    Slideshow Image: 
    http://www.healthcareitnews.com/sites/default/files/augmedixhitn.png
    Slideshow Title: 
    Google Glass startup Augmedix scores $23 million from McKesson Ventures, others
    Slideshow Description: 

    The San Francisco company's focus is a smartglass-powered remote scribe tool to assist physicians with charting and documentation. Augmedix co-founders Ian Shakil, left, and Pelu Tran.

    Read the most recent article.

    Slideshow Image: 
    http://www.healthcareitnews.com/sites/default/files/Stephan%20Chenette%20and%20Rajesh%20Sharma.jpg
    Slideshow Title: 
    Cybersecurity company AttackIQ lands $8.8 million in Series A funding
    Slideshow Description: 

    AttackIQ will use the $8.8 million garnered in its first round of funding to expand its partner, sales and marketing initiatives, and build out its strategic services and engineering teams.

    Read the article.

    Slideshow Image: 
    http://www.healthcareitnews.com/sites/default/files/Mike_baird_500x_0.jpg
    Slideshow Title: 
    Telehealth startup Avizia to expand engineering team, market reach with new funding
    Slideshow Description: 

    Avizia client New York Presbyterian, which has been a leader in video consults and telehealth, participated in this most recent $6 million investment that adds to the $11 million Avizia raised back in July.  As Avizia CEO Mike Baird sees it, telehealth is a proven way for hospitals to close gaps in care and reduce unnecessary ER visits.

    Read the article.

    Slideshow Image: 
    http://www.healthcareitnews.com/sites/default/files/Moxe.jpg
    Slideshow Title: 
    Moxe Health promotes data sharing between payers and providers
    Slideshow Description: 

    "The rules of healthcare are quickly being re-written, as technology presents an opportunity to facilitate more meaningful interactions between payers and providers," Moxe founder and CEO Dan Wilson says, "We enable workflows that are beneficial to both sides of the equation and focus on delivering patient health insights to providers while reducing administrative excess."

    Read the article.

    Slideshow Image: 
    http://www.healthcareitnews.com/sites/default/files/Welltok%20Jeff%20Margolis%20headshot%202.jpg
    Slideshow Title: 
    Welltok grabs $33 million to advance CafeWell population health tool
    Slideshow Description: 

    Welltok pulled in $33.7 million in a new round of funding and said it plans to use the investment to build out its CafeWell Health Optimization Platform. CaféWell enables population health managers to coach and inspire their clients to get healthier. The enterprise-level platform curates and connects consumers with benefits, resources and rewards, and it provides personalized action plans for each individual.

    Read the article.

    Slideshow Image: 
    http://www.healthcareitnews.com/sites/default/files/Josh-Benner_0.jpg
    Slideshow Title: 
    RXAnte gets UPMC boost
    Slideshow Description: 

    UPMC Enterprises, the commercialization arm of UPMC, has purchased all of Millennium Health’s interest in Portland, Maine-based RxAnte. The investment will go toward product development with in-house clinical expertise and accelerating growth. Founded in 2011, RxAnte manages medication use for nearly 7 million people on behalf of health insurers, providers and other stakeholders working to improve safe and effective prescription drug use. Josh Benner will continue as CEO.

    Read the article.

    Slideshow Image: 
    http://www.healthcareitnews.com/sites/default/files/George%20Church.jpg
    Slideshow Description: 

    ReadCoor spins from Harvard after a $23M first round of funding

    ReadCoor will commercialize the Wyss Institute’s FISSEQ – fluorescent in situ sequencing – technology. The startup has developed an imaging platform that provides insight into cancer, infectious diseases, cognitive disorders and more. A team headed by Wyss core faculty member and ReadCoor co-founder George Church invented and developed the platform.

    Read the article.

    Slideshow Image: 
    http://www.healthcareitnews.com/sites/default/files/Lorenz_Bolz%28l2r%29.jpg
    Slideshow Description: 

    Klara lands $3M in funding to further develop its HIPAA-compliant messaging platform for medical teams to centralize all patient-related communication in one place

    Klara co-founders Simon Lorenz, left, and Simon Bolz launched the company in 2014. They describe the technology as a "WhatsApp" for medicine.

    Read the article.

    Slideshow Image: 
    http://www.healthcareitnews.com/sites/default/files/Travis%20Good%2C%20MD_1.jpg
    Slideshow Description: 

    Catalyze, a HITRUST certified cloud provider, has raised $6.5 million in a Series B funding round. "Customers have given our team the unique opportunity to solve a multitude of data exchange challenges that fall outside of traditional standards," says Catalyze CEO Travis Good, MD, pictured above.

    Read the article.

    Slideshow Image: 
    http://www.healthcareitnews.com/sites/default/files/Cernostics%20CEO%20Mike%20Hoerres_0.jpg
    Slideshow Description: 

    Cernostics CEO Mike Hoerres

    Oncology diagnostics company Cernostics has pulled in a $5 million round of funding led by UPMC Enterprises, the commercial arm of UPMC. The funding will go toward growing and accelerating a new diagnostic test for people with Barrett’s Esophagus, a condition that can lead to cancer.

    Read the article.

    Slideshow Image: 
    http://www.healthcareitnews.com/sites/default/files/J.%20Patrick%20Bewley.png
    Slideshow Description: 

    J. Patrick Bewley, CEO of Big Cloud Analytics

    The startup’s COVALENCE Analytics Platform is designed to simplify healthcare and help enterprises better manage population health. The Atlanta-based startup, which offers real-time predictive analytics technology for the Internet of Things, has raised $4.5 million in its first round of funding.

    Read the story.

    Slideshow Image: 
    http://www.healthcareitnews.com/sites/default/files/combine_images.jpg
    Slideshow Description: 

    CareSkore co-founders CEO Jaspinder Grewal and Puneet Dhillon Grewal, MD, chief medical officer

    CareSkore, a population health management technology vendor, has raised $4.3 million in its initial round of funding. And former San Francisco 49ers quarterback Joe Montana is part of the team of investors backing the upstart.

    Read the story.

    Slideshow Image: 
    http://www.healthcareitnews.com/sites/default/files/Manoj%20Saxena_0.jpg
    Slideshow Description: 

    Manoj Saxena, CognitiveScale executive chairman

    CognitiveScale revealed a $21.8 million round of financing to advance its industry-specific machine intelligence software. “This funding will accelerate our mission to bring scalable, practical AI to the enterprise,” Saxena said.

    Read the story

    Slideshow Image: 
    http://www.healthcareitnews.com/sites/default/files/Accolade%27s%20Raj-Singh-1_0.jpg
    Slideshow Description: 

    Accolade CEO Rajeev Singh

    Seattle-based Accolade, an on-demand healthcare concierge offering for employers, health plans and health systems, has raised $71.1 million to ramp up its technology platform. Accolade’s model combines personalized service with clinical support and consumer engagement technologies to uncover inefficient healthcare utilization and its impact on healthcare costs and outcomes. Cost savings range from 5-15 percent, Accolade said.

    Read the story

    Slideshow Image: 
    http://www.healthcareitnews.com/sites/default/files/redox_2.jpg
    Slideshow Title: 
    Redox
    Slideshow Description: 

    Luke Bonney, Niko Skievaski and James Lloyd founded Redox in 2014. The Epic alumni who run Redox are aggressive about interoperability, and they claim it's easier to achieve than it seems. They call it "turnkey interoperability." Most recently Redox has integrated its health apps with Epic, Cerner and eClinicalWorks, among others.

    Read full story.

    Slideshow Image: 
    http://www.healthcareitnews.com/sites/default/files/mostashari_0.png
    Slideshow Description: 

    Aledade founder Farzad Mostashari, MD

    Aledade, former ONC chief and physician Farzad Mostashari’s accountable care organization startup, is 'steady as she goes' as it enters its third year.

    Read full story.

    Slideshow Image: 
    http://www.healthcareitnews.com/sites/default/files/Matt-Tindall_240px.png
    Slideshow Title: 
    Omicia CEO Matt Tindall
    Slideshow Description: 

    Omicia will expand HIPAA-compliant, cloud-enabled platform for research, population health, clinical trials. The startup landed $23 million in its Series B financing round, completed on June 8. UPMC led the funding.

    Read the full story.

    Slideshow Image: 
    http://www.healthcareitnews.com/sites/default/files/Tom%20Dorsett_0.png
    Slideshow Title: 
    Tom Dorsett, CEO of ePatientFinder
    Slideshow Description: 

    ePatientFinder announced on June 9 that it had raised $8.2 million to build out its Clinical Trial Exchange platform. The EHR-agnostic, cloud-based service enables doctors to locate new treatment options, preventative procedures and clinical trials for their patients.

    Read full story.

    Slideshow Image: 
    http://www.healthcareitnews.com/sites/default/files/Elad%20Benjamin_0.jpg
    Slideshow Title: 
    Elad Benjamin, CEO of Zebra
    Slideshow Description: 

    Intermountain led a $12 million funding round that Zebra said it will use to build out its analytics engine with machine learning algorithms for diagnosing imaging scans.

    Read full story

    Slideshow Image: 
    http://www.healthcareitnews.com/sites/default/files/iBeat.png
    Slideshow Title: 
    Practice Fusion veterans announce iBeat wearable-as-a-service
    Slideshow Description: 

    The forthcoming cloud service will monitor a user’s heart activity around the clock, according to CEO Ryan Howard.

    Read full story.

    Slideshow Image: 
    http://www.healthcareitnews.com/sites/default/files/Darren%20Schulte%20Apixio%20Crop_0.jpg
    Slideshow Title: 
    Apixio CEO Darren Schulte raises $19 million venture capital to advance cognitive computing
    Slideshow Description: 

    The data science company said it will use the investment money to develop applications for care and quality measurement. 

    Read full story.

    Slideshow Image: 
    http://www.healthcareitnews.com/sites/default/files/Bryan%20Haardt-HITN_0.png
    Slideshow Title: 
    Decisio Health introduces clinical platform, draws $4.5M in Series A round
    Slideshow Description: 

    Decisio Health, a startup that aims to help acute-care provider organizations continually improve their clinical processes, launched the Decisio Health Clinical Intelligence Platform on May 17 and also announced $4.5M in Series A funding. The new platform is based on technology developed at the University of Texas Health Center. Read full story.

    Slideshow Image: 
    http://www.healthcareitnews.com/sites/default/files/Andrew%20Kress%20and%20%20Andrew%20Goldberg.jpg
    Slideshow Description: 

    Andrew Kress, left, co-founder and CEO of HealthVerity, and Andrew Goldberg, co-founder and COO

    HealthVerity’s technology enables customers to rapidly discover, license and assemble patient data from a wide range of traditional and emerging healthcare data sources that can aid pharmaceutical, hospital and payer organizations seeking to enhance patient insights from existing and new data sources. The startup has landed $7.1 million in its first round of funding.

    Read the article.

    Slideshow Image: 
    http://www.healthcareitnews.com/sites/default/files/rubenhitn.png
    Slideshow Description: 

    Ruben Amarasingham, MD

    Pieces Technologies landed $21.6 million in its first round of funding in March 2016. The investment will help the fledgling company advance its cloud-based population health management tools, said CEO and founder Ruben Amarasingham, MD. Pieces Tech’s software platform, incubated at the Parkland Center for Clinical Innovation, provides integrated monitoring, prediction, workflow optimization and organizational learning services specifically for hospitals and health systems.

    Read the article.

    Slideshow Image: 
    http://www.healthcareitnews.com/sites/default/files/Founghitn.png
    Slideshow Description: 

    Alejandro Foung, Lantern co-founder and CEO

    Lantern, a San Francisco-based startup with 17 employees, is working with UPMC Enterprises, the commercialization arm of the Pittsburgh-based healthcare giant, to further develop the company’s online mental health wellness services and products.

    Slideshow Image: 
    http://www.healthcareitnews.com/sites/default/files/Dan%20Burton.jpg
    Slideshow Description: 

    Health Catalyst CEO Dan Burton

    Health Catalyst has raised $70 million in its fifth round of funding, bringing the total of venture capital it has attracted to $235 million.

    Norwest Venture Partners, the lead investor in three previous rounds of funding, and UPMC Enterprises, the commercialization arm of UPMC, co-led the round. UPMC is also a Health Catalyst customer and technology development partner.

    Read the story

    Slideshow Image: 
    http://www.healthcareitnews.com/sites/default/files/zipnosispearcehitn.png
    Slideshow Description: 

    Zipnosis CEO Jon Pearce

    Zipnosis, a startup that provides virtual care platforms, has raised $17 million in its Series A financing round to speed product development. Zipnosis describes its offering as a platform that empowers health systems to launch proprietary branded virtual care service lines staffed with their own clinicians. The goal is to maximize the clinician's time and ensure clinically appropriate patient outcomes.

    Read the story

    Slideshow Image: 
    http://www.healthcareitnews.com/sites/default/files/flatironhitn.png
    Slideshow Description: 

    Nat Turner and Zach Weinberg, co-founders of Flatiron Health

    In recent months New York-based Flatiron Health opened an office in San Francisco, completed a second round of funding – $130 million – in May 2014, and doubled down on using data to work on eradicating cancer. The company also joined forces with another oncology company to work on the next generation of cloud-based, electronic health record, data analytics and decision support software for cancer care providers around the world.

    Read the story

    Slideshow Image: 
    http://www.healthcareitnews.com/sites/default/files/soonshionhhitn.png
    Slideshow Description: 

    Patrick Soon-Shiong, founder and CEO of NantHealth

    Patrick Soon-Shiong, businessman, surgeon, scientist and founder of health IT company NantHealth, announced back in July 2015 that he planned to take the company public by the end of the year. "We feel we have one or two transactions to accomplish, then we will initiate the public offering that we anticipate will happen probably within this year," Soon-Shiong was quoted as saying in the Los Angeles Times. The health IT company aims to solve the interoperability crisis and also promises to take genomics and clinical decision support to a new level. We’re still watching for an IPO in 2016.

    Read the story

    Teaser: 

    From virtual care platforms to precision medicine, data analytics to interoperability, the healthcare IT landscape is constantly changing thanks to new approaches driven by entrepreneurs making waves in the sector.

    The following gallery highlights some of those emerging companies and people who made news in 2016. Check back often as we will be updating the collection regularly.

    Thumbnail: 
    Health IT startups to watch
    Custom OAS pagetag: 
    Subheader: 
    From virtual care platforms to precision medicine, data analytics to interoperability, the healthcare IT landscape is constantly changing thanks to new approaches driven by entrepreneurs making waves in the sector.
    Specific Terms: 

              Google Photos can now use machine learning to share your pics   
    Reported by TechRadar 32 minutes ago.
              Analysis of Variance of Cross-Validation Estimators of the Generalization Error   
    This paper brings together methods from two different disciplines: statistics and machine learning. We address the problem of estimating the variance of cross-validation (CV) estimators of the generalization error. In particular, we approach the problem of variance estimation of the CV estimators of generalization error as a problem in approximating the moments of a statistic. The approximation illustrates the role of training and test sets in the performance of the algorithm. It provides a unifying approach to evaluation of various methods used in obtaining training and test sets, and it takes into account the variability due to different training and test sets. For the simple problem of predicting the sample mean and in the case of smooth loss functions, we show that the variance of the CV estimator of the generalization error is a function of the moments of the random variables Y = Card(S_j ∩ S_j') and Y* = Card(S_j^c ∩ S_j'^c), where S_j and S_j' are two training sets, and S_j^c, S_j'^c are the corresponding test sets. We prove that the distribution of Y and Y* is hypergeometric and we compare our estimator with the one proposed by Nadeau and Bengio (2003). We extend these results to the regression case and the case of absolute error loss, and indicate how the methods can be extended to the classification case. We illustrate the results through simulation.
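The hypergeometric claim is easy to check empirically. The sketch below (an illustration, not the paper's code) draws pairs of random training sets of size k from n examples and compares the average overlap Y with the hypergeometric mean k²/n:

```python
import random

# Simulate Y = Card(S_j ∩ S_j'), the overlap of two random size-k training
# sets drawn from n examples, and compare with the hypergeometric mean.
n, k, trials = 20, 12, 20000
rng = random.Random(0)
items = list(range(n))

overlaps = []
for _ in range(trials):
    s1 = set(rng.sample(items, k))
    s2 = set(rng.sample(items, k))
    overlaps.append(len(s1 & s2))

empirical_mean = sum(overlaps) / trials
hypergeom_mean = k * k / n  # E[Y] for Hypergeometric(n, k, k) = 7.2 here
print(empirical_mean, hypergeom_mean)
```

Note also that the overlap can never fall below 2k - n (here 4), which the hypergeometric support reflects.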
              Comment on Hyperspheres & the curse of dimensionality by Machine Learning – An Introduction – Part 2 | Vincent-Philippe Lauzon's blog   
    […] (22-06-2017):  See Hyperspheres & the curse of dimensionality article for a detailed […]
              Comment on Machine Learning – An introduction – Part 1 by Hyperspheres & the curse of dimensionality | Vincent-Philippe Lauzon's blog   
    […] previously talked about the curse of dimensionality (more than 2 years ago) related to Machine […]
              Comment on That social mobility report by TemporarilyAnonymousCoward   
    Ta, Bongo. BiSw Much truth in that. Quite often someone asks me for something that sounds terribly complicated, or I can see a really "nice" sophisticated way of doing it - then after a bit of thought I figure out I can knock something off using GCSE-level maths and high-school level IT skills, and if it works, people are more than happy to pay for it. I've worked for some serious people, household names, and less well-known types like venture capitalists - shove them an Excel spreadsheet I could have produced in Year 11, that does exactly what they said they wanted to do but couldn't figure out how to do it, and they're more than happy to stuff the wonga in my direction. They were stuck, you got them unstuck, job done. (It's not just me. Old mate of mine is Oxbridge-educated, STEM degree, and in the UK on a "highly skilled" visa working for a multinational logistics firm. Not done anything beyond basic Excel for years. Moans they're bored out of their skull, never does anything they couldn't have done at 14. Dude! You have come from your poor-as-mud homeland that you tell anyone in earshot you hate the backwardsness of, earning a London salary, living in one of the world's great global cities, and you're moaning that work <i>isn't hard enough</i>? Chill! Proper <i>bona fide</i> geeks are weird like that.) Had a research student need help with some stats analysis, terrified of the SPSS she was being forced to use. Wanted all this complicated stuff that more senior researchers were badgering her to do - would mean going into Syntax Editor and writing some custom code, no way on Earth would she understand it. Not so intimidating for me because I can code, but neither the syntax nor its output would have been in the least bit comprehensible to her. 
Took a thinky-pause, told her to drop it, stick to a few simple methods I had learned to execute in the GUI within the first hour I ever saw SPSS, taught her how to use the menus and interpret the results and within a couple of hours she could do it all for herself. Freed up her time to focus on the qualitative stuff she was clearly better at. Did her esteemed supervisors care that she had junked their advice and stuck to the easy-peasy things? Nope, the research won a prize from the (Global Top 10) uni she was based at. Cos it was simple, interpretable, correct, and the person who wrote it eventually understood what she was doing. But I think a lot of other people in my situation would have made the wrong call, because otherwise <i>the work isn't hard enough</i>. (And the client said it, so obviously the client wanted it! And a world-renowned professor had suggested it to her, so it <i>had</i> to be a good idea!) So here's the funny thing. If I were a code junkie churning out SPSS Syntax for fun, obviously I could charge a higher hourly rate. In some fields of expertise, law or machine learning for example, I could probably stick a zero on the end of my pay rate. But still <i>somebody</i> would have to do my work. And on current trends they'd still get paid well enough for it. Look, I'm in the richest 10% of one of the richest countries on the planet, am considering the practicalities of retirement or at least financial independence at 40, can hardly complain about the pay can I? Particularly when it is a major event for me to ever have to cart out higher-level skills than bog-standard first-year undergrad stuff. I honestly thought this sort of grunt work was meant to have disappeared by now, automated or outsourced off to India or the Philippines. Argued about this with a mate 15 years ago, he wanted to go into programming and I said he was mad. In 10, 20, 30 years, how many programming tasks did he think would be done in Bangalore? 
Safer to be a plumber. He convinced me he was sane. If your skills are world-class, nobody outsources you, though you may need to go to where the work is. He was proven right, in the wallet, where it matters - now earns his millions (literally) in Silicon Valley, far cry from suburban England. But I'm not sure we called it spot-on; implicit for us in the idea of the rewards of a globalised economy going to those with the global-level skills, was the idea that if you didn't stay razor-sharp then you were literally in the firing line. All these mushy undergrad-level skills were going to be near-worthless because they were going to be bid down to the price of the global lowest-bidder, and you don't want to get into that kind of race with someone whose living costs are a tenth of yours. Now, there are plenty of unemployed middle-aged IT contractors with unpublishable opinions about Indian techies, so I'm not saying there's been no effect. But there's still a whole marketplace of skills and jobs that haven't been either rendered obsolete or sucked overseas. The bitty freelancing stuff I do is too small for the big firms to bother with, I guess, and requires too much human contact to be automated away. Other tasks require more local skills. Know a lass who made good money doing English conversation classes with Chinese businesspeople on Skype - whole line of work that didn't exist 15 years ago. As a top-up income I reckon it actually takes fewer skills than evening bar-work (I'd struggle not to spill the drinks, but consider the people skills and situational awareness that job can demand of you) yet the pay is more than double. And the best-educated kid in Delhi can't claim he has a native British accent. So strikes me, for now at least, the opportunities are still right out there - percolating through your household via wi-fi this very moment, just waiting for people to take them. 
Frankly, the world couldn't do more for us unless Opportunity herself jumped up and down in our faces, waving a flag that said HERE FOR YOU TO TAKE. For me that's what makes the X-million man-hours expended each week on EastEnders and X-Factor such a tragedy. Yes, we all know world experts can make unfathomable amounts of money in ever-shorter spans of time ... but you don't have to be remotely near that level to benefit. You need perseverance, and to put in a few hard yards, and to keep an eye out for what's available to you, but it really doesn't take a genius-level IQ or PhD levels of study. With all the resources available to us, you can learn a hell of a lot in a few hundred hours. <a href="https://www.thinkbox.tv/News-and-opinion/Newsroom/10032016-New-figures-put-TV-viewing-in-perspective" rel="nofollow">The average Brit watches something like 1400 hours of TV per year.</a> Opportunity costs. For me, it was always prize enough to learn about the wonders of the natural and cultural world around us, and the technological world that thousands of great minds had built and we all now inhabit ... but I was weird like that. Now there's an added incentive: <i>learn this stuff, plug into the world around you, feed the unmet hungers of its inhabitants, grasp the most golden opportunities ever bestowed upon mankind - without leaving your living room, if you so prefer - and take your rich pecuniary reward</i>. But nope, say the people, we'd still rather sit around watching (in ever-decreasing numbers, thank goodness) Simon Smug-Faced Cowell. Gaze upon him, behold his smirk as he takes his millions from us all! Different strokes for different folks, and if they have a different utility function to me then so be it, but people are bloody weird like that. I agree with BIND that self-learning isn't for everyone. 
Frankly there are always going to be a few people who will never have the ability to provide something people are willing to pay for while earning the minimum wage, but they are a minority and that's what the welfare system is for. For most people, though, the opportunities are better than at any point in history.
              Visual/UX Designer - EXL - Jersey City, NJ   
    Our Analytics practice works with clients and internal teams to build innovative products that have machine learning and advanced technologies at their core....
    From EXL - Tue, 18 Apr 2017 00:09:55 GMT - View all Jersey City, NJ jobs
              Full Stack Software Developer - EXL - Jersey City, NJ   
    Interest in Machine Learning. EXL Analytics offers an exciting, fast-paced and innovative environment, which brings together a group of sharp and...
    From EXL - Tue, 18 Apr 2017 00:09:55 GMT - View all Jersey City, NJ jobs
              Machine Learning Scientist/Senior Scientist - EXL - Jersey City, NJ   
    Machine Learning Scientist/Senior Scientist. EXL Analytics offers an exciting, fast-paced and innovative environment, which brings together a group of sharp and...
    From EXL - Tue, 18 Apr 2017 00:09:53 GMT - View all Jersey City, NJ jobs
              Quality Assurance Analyst - Bilingual English/German - A9.com - Palo Alto, CA   
    Experience using Internet search engines for business or personal use. We tackle complex problems in computer vision, image recognition, machine learning, and...
    From A9.com - Wed, 05 Apr 2017 21:58:33 GMT - View all Palo Alto, CA jobs
          Google Cloud Next makes a stop in Milan   
    An event organized by Google that stopped in Milan, where cloud, machine learning, data analysis, security and software development were discussed.
              Machine Learning in Clojure - part 2   
    I am trying to implement the material from the Machine Learning course on Coursera in Clojure.

    My last post was about doing linear regression with 1 variable. This post will show that the same process works for multiple variables, and then explain why we represent the problem with matrices.

    The only code in this post calls the functions introduced in the last one. I also use the same examples, so this post will make a lot more sense if you read that one first.

    For reference, here is the linear regression function:

    (defn linear-regression [x Y a i]
      (let [m (first (cl/size Y))
            X (add-ones x)]
        (loop [Theta (cl/zeros 1 (second (cl/size X))) i i]
          (if (zero? i)
            Theta
            (let [ans (cl/* X (cl/t Theta))
                  diffs (cl/- ans Y)
                  dx (cl/* (cl/t diffs) X)
                  adjust-x (cl/* dx (/ a m))]
              (recur (cl/- Theta adjust-x)
                     (dec i)))))))


    Because the regression function works with matrices, it does not need any changes to run a regression over multiple variables.

    Some Examples

    In the English Premier League, a team gets 3 points for a win, and 1 point for a draw. Trying to find a relationship between wins and points gets close to the answer.

    (->> (get-matrices [:win] :pts)
         reg-epl
         (print-results "wins->points"))

    ** wins->points **
    A 1x2 matrix
    -------------
    1.24e+01 2.82e+00


    When we add a second variable, the number of draws, we get close enough to ascribe the difference to rounding error.

    (->> (get-matrices [:win :draw] :pts)
         reg-epl
         (print-results "wins+draws->points"))

    ** wins+draws->points **
    A 1x3 matrix
    -------------
    -2.72e-01 3.01e+00 1.01e+00
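Since points are exactly 3·wins + 1·draws, the fitted coefficients can be sanity-checked against the known rule. Here is a minimal NumPy sketch of the same gradient-descent fit; the win/draw counts are invented for illustration, not the real EPL data:

```python
import numpy as np

# Invented season records: [wins, draws] per team (not the real EPL data)
X = np.array([[28.0, 5.0], [20.0, 10.0], [12.0, 14.0], [8.0, 9.0]])
y = 3 * X[:, 0] + 1 * X[:, 1]              # points = 3*wins + 1*draws, by rule

Xb = np.hstack([np.ones((len(X), 1)), X])  # prepend the intercept column
theta = np.zeros(3)
alpha, m = 0.003, len(y)
for _ in range(100_000):
    theta -= alpha / m * (Xb @ theta - y) @ Xb

print(theta)  # approaches [0, 3, 1]: no intercept, 3 per win, 1 per draw
```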

    In the last post, I asserted that scoring goals was the key to success in soccer.

    (->> (get-matrices [:for] :pts)
         reg-epl
         (print-results "for->points"))


    ** for->points **
    A 1x2 matrix
    -------------
    2.73e+00 9.81e-01

    If you saw Costa Rica in the World Cup, you know that defense counts for a lot too. Looking at both goals for and against can give a broader picture.

    (->> (get-matrices [:for :against] :pts)
         reg-epl
         (print-results "for-against->pts"))


    ** for-against->pts **
    A 1x3 matrix
    -------------
    3.83e+01 7.66e-01 -4.97e-01


    The league tables contain 20 fields of data, and the code works for any number of variables. Will adding more features (variables) make for a better model?

    We can expand the model to include whether the goals were scored at home or away.

    (->> (get-matrices [:for-h :for-a :against-h :against-a] :pts)
         reg-epl
         (print-results "forh-fora-againsth-againsta->pts"))


    ** forh-fora-againsth-againsta->pts **
    A 1x5 matrix
    -------------
    3.81e+01 7.22e-01 8.26e-01 -5.99e-01 -4.17e-01

    The statistical relationship we have found suggests that goals scored on the road are worth about 0.1 points more than those scored at home. The difference in goals allowed is even greater; they cost 0.6 points at home and only 0.4 on the road.

    Wins and draws are worth the same number of points, no matter where the game takes place, so what is going on?

    In many sports there is a “home field advantage”, and this is certainly true in soccer. A team that is strong on the road is probably a really strong team, so the relationship we have found may indeed be accurate.

    Adding more features indiscriminately can lead to confusion.

    (->> (get-matrices [:for :against :played :gd :for-h :for-a] :pts)
         reg-epl
         (map *)
         (print-results "kitchen sink"))

    ** kitchen sink **
    (0.03515239958218979 0.17500425607459014 -0.22696465757628984 1.3357911841232217 0.4019689136508527 0.014497060396707949 0.1605071956778842)


    When I printed out this result the first time, the parameter representing the number of games played displayed as a decimal point with no digit before or after. Multiplying each term by 1 got the numbers to appear. Weird.

    The :gd stands for “goal difference”; it is the difference between the number of goals a team scores and the number it gives up. Because we are also pulling goals for and against, this is a redundant piece of information. Pulling home and away goals for makes the combined goals-for column redundant as well.

    All of the teams in the sample played the same number of games, so that variable should not have influenced the model. Looking at the values, our model says that playing a game is worth 1.3 points, and this is more important than all of the other factors combined. Adding that piece of data removed information.
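There is a linear-algebra reason for the confusion: when one column is an exact combination of others (gd = for − against, and played is constant, a multiple of the intercept column), many different coefficient vectors produce identical predictions, so the individual weights lose their meaning. A quick NumPy check with toy numbers:

```python
import numpy as np

# Toy goals-for/goals-against columns, plus the derived goal difference
gf = np.array([86.0, 64.0, 55.0, 73.0])
ga = np.array([43.0, 43.0, 60.0, 37.0])
X = np.column_stack([np.ones(4), gf, ga, gf - ga])  # intercept, for, against, gd

# Four columns, but gd is a combination of the others, so the rank is only 3
print(np.linalg.matrix_rank(X))  # 3

# Two different coefficient vectors that make identical predictions
t1 = np.array([38.0, 0.9, -0.5, 0.0])
t2 = np.array([38.0, 0.4, 0.0, 0.5])  # weight shifted onto the gd column
print(np.allclose(X @ t1, X @ t2))   # True
```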

    Let’s look at one more model with redundant data: goals for, goals against, and the goal difference, which is just the difference of the two.

    (->> (get-matrices [:for :against :gd] :pts)
         reg-epl
         (print-results "for-against-gd->pts"))

    ** for-against-gd->pts **
    A 1x4 matrix
    -------------
    3.83e+01 3.45e-01 -7.57e-02 4.21e-01


    points = 38.3 + 0.345 * goals-for - 0.0757 * goals-against + 0.421 * goal-difference

    The first term, Theta[0] is right around 38. If a team neither scores nor allows any goals during a season, they will draw all of their matches, earning 38 points. I didn’t notice that the leading term was 38 in all of the cases that included both goals for and against until I wrote this model without the exponents.

    Is this model better or worse than the one that looks at goals for and goals against, without goal difference? I can’t decide.

    Why Matrices?

    Each of our training examples has a series of X values and one corresponding Y value. Our dataset contains 380 examples (20 teams * 19 seasons).
    Our process is to make a guess at the proper parameter to multiply each of the X values by, and compare the results in each case to the Y value. We use the differences between the products of our guesses and the real-life values to improve our guesses.

    This could be done with a loop. With m examples and n features we could do something like

    for i = 1 to m
        guess = 0
        for j = 1 to n
            guess = guess + X[i, j] * Theta[j]
        end for j
        difference[i] = guess - Y[i]
    end for i

    We would need another loop to calculate the new values for Theta.

    Matrices have operations defined that replace the above loops. When we multiply the X matrix by the Theta vector, for each row of X, we multiply each element by the corresponding element in Theta, and add the products together to get the first element of the result.

    Matrix subtraction requires two matrices that are the same size. The result of subtraction is a new matrix that is the same size, where each element is the difference of the corresponding elements in the original matrices.

    Using these two operations, we can replace the loops above with

    Guess = X * Theta
    Difference = Guess - Y
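The equivalence is easy to demonstrate in code. A NumPy sketch with arbitrary numbers, comparing the nested loops to the single matrix expression:

```python
import numpy as np

X = np.array([[1.0, 5.0], [1.0, 8.0], [1.0, 12.0]])  # m=3 examples, n=2 features
Theta = np.array([2.0, 3.0])
Y = np.array([20.0, 25.0, 40.0])

# Loop version: build each prediction element by element
m, n = X.shape
diff_loop = np.empty(m)
for i in range(m):
    guess = 0.0
    for j in range(n):
        guess += X[i, j] * Theta[j]
    diff_loop[i] = guess - Y[i]

# Matrix version: both loops collapse into one multiplication
diff_matrix = X @ Theta - Y

print(np.allclose(diff_loop, diff_matrix))  # True
```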

    Clearly the notation is shorter. The other advantage is that there are matrix libraries that are able to do these operations much more efficiently than can be done with loops.

    There are two more operations needed in the linear regression calculations. One is multiplying a matrix by a single number, called a scalar. When multiplying a matrix by a scalar, multiply each element by that number: [1 2 3] * 3 = [3 6 9].

    The other operation we perform is called a transpose. Transposing a matrix turns all of its rows into columns, and its columns into rows. In our examples, the size of X is m by n, and the size of Theta is 1 by n. We don’t have any way to multiply an m by n matrix and a 1 by n matrix, but we can multiply an m by n matrix and an n by 1 matrix. The product will be an m by 1 matrix.
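Those shape rules can be checked mechanically. A NumPy sketch using the sizes from our examples:

```python
import numpy as np

m, n = 380, 2
X = np.ones((m, n))      # m by n
Theta = np.ones((1, n))  # 1 by n

# X @ Theta would fail: (380, 2) times (1, 2) doesn't line up.
# Transposing Theta gives an n by 1 matrix, and the product is m by 1.
predictions = X @ Theta.T
print(predictions.shape)  # (380, 1)
```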

    In the regression function there are a couple of transposes to make the dimensions line up. That is the meaning of the cl/t expression. cl is an alias for the Clatrix matrix library.

    Even though we replaced a couple of calculations that could have been done in loops with matrix calculations, we are still performing these calculations in a series of iterations. There is a technique, called the normal equation, for calculating linear regression without the iterative process.

    I am not going to discuss the normal equation for two reasons. First, I don’t understand the mathematics. Second, the process we use, gradient descent, can be used with other types of machine learning techniques, and the normal equation cannot.

              Linear Regression in Clojure, Part I   
    Several months ago I recommended the Machine Learning course from Coursera. At the time, I intended to retake the course and try to implement the solutions to the homework in Clojure. Unfortunately, I got involved in some other things, and wasn’t able to spend time on the class. 

    Recently, a new book has come out, Clojure for Machine Learning. I am only a couple of chapters in, but it has already been a good help to me. I do agree with this review that the book is neither a good first Clojure book nor a good first machine learning resource, but it does join the two topics well.

    Linear Regression
    The place to start with machine learning is Linear Regression with one variable. The goal is to come up with an equation in the familiar form of y = mx + b, where x is the value you know and y is the value you are trying to predict. 

    Linear regression is a supervised learning technique. This means that for each of the examples used to create the model the correct answer is known. 

    We will use slightly different notation to represent the function we are trying to find. In place of b we will put Theta[0] and in place of m we will put Theta[1]. The reason for this is that we are going to use a generalized technique that will work for any number of variables, and the result of our model will be a vector called Theta. 

    Even though our technique will work for multiple variables, we will focus on predicting based on a single variable. This is conceptually a little simpler, but more importantly it allows us to plot the input data and our results, so we can see what we are doing.

    The Question
    A number of years ago I read the book Moneyball, which is about the application of statistics to baseball. One of the claims in the book is that the best predictor for the number of games a baseball team wins in a season is the number of runs they score that season. To improve their results, teams should focus on strategies that maximize runs.

    The question I want to answer is whether the same is true in soccer: is the number of points a team earns in a season correlated with the number of goals they score? For any who don’t know, a soccer team is awarded 3 points for a win and 1 point for a tie.

    The importance of goals is a relevant question for a Manchester United fan. At the end of the 2012-13 season, head coach Sir Alex Ferguson retired after winning his 13th Premier League title. He was replaced by David Moyes. Under Moyes the offense which had been so potent the year before looked clumsy. Also, the team seemed unlucky, giving up goals late in games, turning wins into draws and draws into defeats. The team that finished 1st the year before finished 7th in 2013-14. Was the problem a bad strategy, or bad luck?

    The Data
    I have downloaded the league tables for the last 19 years of the English Premier League from stato.com. There have actually been 22 seasons in the Premier League, but in the first 3 seasons each team played 42 games, vs 38 games for the last 19 seasons, and I opted for consistency over quantity.

    I actually want to run 3 regressions, first one on a case where I am sure there is a correlation, then on a case where I am sure there is not, and then finally to determine whether a correlation exists between goals and points. 

    There should be a high correlation between the number of wins a team has and their number of points. Since every team plays the same number of games, there should be no correlation between the number of games played and a team's position in the standings.

    The Process
    We will use a technique called gradient descent to find the equation we want to use for our predictions. We will start with arbitrary values for Theta[0] and Theta[1], setting both to 0. We will multiply each x value by Theta[1] and add Theta[0], and compare that result to the corresponding value of Y. We will use the differences between Y and the results of Theta * X to calculate new values for Theta, and repeat the process.

    One way of measuring the quality of the prediction is with a cost function that measures the mean square error of the predictions. 

    1/2m * sum(h(x[i]) - y[i])^2

    Where m is the number of test cases we are evaluating, and h(x[i]) is the predicted value for a test case i. We will not use the cost function directly, but its derivative is used in improving our predictions of Theta as follows:
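As code, the cost function is a one-liner over the residuals. A hedged NumPy sketch (h(x) here is the linear prediction X·Theta, and the tiny dataset is made up):

```python
import numpy as np

def cost(X, y, theta):
    """Mean squared error: 1/(2m) * sum((h(x[i]) - y[i])^2)."""
    m = len(y)
    residuals = X @ theta - y
    return residuals @ residuals / (2 * m)

# With theta = [0, 3] the predictions match y exactly, so the cost is 0
X = np.array([[1.0, 10.0], [1.0, 20.0]])
y = np.array([30.0, 60.0])
print(cost(X, y, np.array([0.0, 3.0])))  # 0.0
```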

    Theta[0] = Theta[0] - alpha * 1/m * sum(h(x[i]) - y[i])
    Theta[1] = Theta[1] - alpha * 1/m * sum((h(x[i]) - y[i]) * x[i])

    We have added one more symbol here: alpha, the learning rate. The learning rate determines how much we modify Theta each iteration. If alpha is set too high, the process will oscillate between Thetas that are too low and too high, and will never converge. When alpha is set lower than necessary, extra iterations are needed to converge.
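Written as code, both update rules become a single vector operation once the column of ones is in place. A NumPy sketch with invented data (y = 2x exactly, so Theta should approach [0, 2]):

```python
import numpy as np

def gd_step(X, y, theta, alpha):
    """One gradient-descent update: theta - alpha/m * (X*theta - y)' * X."""
    m = len(y)
    return theta - alpha / m * (X @ theta - y) @ X

X = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])  # ones column, then x
y = np.array([2.0, 4.0, 6.0])                       # y = 2x exactly
theta = np.zeros(2)
for _ in range(5000):
    theta = gd_step(X, y, theta, alpha=0.1)
print(theta)  # approaches [0, 2]
```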

    I need to mention again that this methodology and these equations come directly from Professor Ng’s machine learning course on Coursera that I linked above. He spends over an hour on linear regression with one variable, and if you want more information that is the place to go.

    The Code
    The actual calculations we are going to do are operations on matrices. When we multiply the matrix X by the matrix Theta, we obtain a matrix of predictions that can be compared element by element with the matrix Y. The same results could be obtained by looping over each test case, but expressing the computations as matrix operations yields simpler equations, shorter code and better performance.

    I used the clatrix matrix library for the calculations.

    One other thing to note: in the equations above, Theta[0] is treated differently than Theta[1]; it is not multiplied by any x terms, either in the predictions or in the adjustments after the predictions. If we add an additional column X[0] to our X matrix and make all of its values 1, we no longer have to make a distinction between Theta[0] and Theta[1].

    (defn add-ones "Add an X[0] column of all 1's to use with Theta[0]"
      [x]
      (let [width (first (cl/size x))
            new-row (vec (repeat width 1))
            new-mat (cl/matrix new-row)]
        (cl/hstack new-mat x)))
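For comparison, the same column-of-ones trick sketched in NumPy (an illustration, not the post's Clatrix code):

```python
import numpy as np

def add_ones(x):
    """Prepend an X[0] column of all 1s, so Theta[0] needs no special case."""
    return np.hstack([np.ones((x.shape[0], 1)), x])

x = np.array([[5.0], [8.0], [12.0]])
print(add_ones(x))
# [[ 1.  5.]
#  [ 1.  8.]
#  [ 1. 12.]]
```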

    (defn linear-regression [x Y a i]
      (let [m (first (cl/size Y))
            X (add-ones x)]
        (loop [Theta (cl/zeros 1 (second (cl/size X))) i i]
          (if (zero? i)
            Theta
            (let [ans (cl/* X (cl/t Theta))
                  diffs (cl/- ans Y)
                  dx (cl/* (cl/t diffs) X)
                  adjust-x (cl/* dx (/ a m))]
              (recur (cl/- Theta adjust-x)
                       (dec i)))))))

    The linear-regression function takes as parameters the X and Y values that we use for training, the learning rate and the number of iterations to perform. We add a column of ones to the passed in X values. We initialize the Theta vector, setting all the values to 0. 

    At this point X is a matrix of 380 rows and 2 columns. Theta is a matrix of 1 row and 2 columns. If we take the transpose of Theta (turn the rows into columns, and columns into rows) we get a new matrix, Theta’, which has 2 rows and 1 column. Multiplying the matrix X by Theta’ yields a 380x1 matrix containing all of the predictions, the same size as Y.  

    Taking the difference between the calculated answers and our known values yields a 380x1 matrix. We transpose this matrix, making it 1x380, and multiply it by our 380x2 X matrix, yielding a 1x2 matrix. We multiply each element in this matrix by a and divide by m, ending up with a 1x2 matrix which has the amounts we want to subtract from Theta, which is also a 1x2 matrix. All that is left to do is recur with the new values for Theta.
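That chain of shapes can be verified step by step. A NumPy sketch with random data of the stated sizes (mirroring, not reproducing, the Clojure function):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((380, 2))   # 380 examples: ones column plus one feature
Y = rng.random((380, 1))
Theta = np.zeros((1, 2))
a, m = 0.0001, 380

ans = X @ Theta.T          # (380, 2) x (2, 1) -> (380, 1) predictions
diffs = ans - Y            # still (380, 1)
dx = diffs.T @ X           # (1, 380) x (380, 2) -> (1, 2)
adjust_x = dx * (a / m)    # scalar multiply keeps it (1, 2), same shape as Theta
print(ans.shape, dx.shape, adjust_x.shape)  # (380, 1) (1, 2) (1, 2)
```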

    The Results
    Since I am going to be performing the same operations on three different data sets, I wrote a couple of helper functions. plot-it uses Incanter to display a scatter plot of the data. reg-epl calls the linear-regression function specifying a learning rate of .0001 and 1000000 iterations. I also have a get-matrices function, which downloads the data and creates the X and Y matrices for the specified fields.

    (def wins (get-matrices [:win] :pts))
    (plot-it wins)
    (def win-theta (reg-epl wins))
    (println "Wins-points: " win-theta)

    Yields this graph



    and these results

    Wins-points:   A 1x2 matrix
     -------------
     1.24e+01  2.82e+00

    The relationship between wins and points is obvious in the graph. The equation we developed estimates wins as being worth 2.82 points, instead of the correct 3. This is because it had no way to account for draws, so it uses a high intercept to work those extra points in.

    A team with 0 wins would be expected to have 12.4 points. A team with 10 wins would have 12.4 + 2.82 * 10 = 40.6 points. A team with 25 wins would have 12.4 + 2.82 * 25 = 82.9 points.

    (def played (get-matrices [:played] :rank))
    (plot-it played)
    (def played-theta (reg-epl played))
    (println "played-rank: " played-theta)
    (println "expected finish:" (+ (first played-theta)
                                   (* 38 (second played-theta))))

    Playing 38 games gives you an equal chance of having a finishing position anywhere between 1 and 20. The graph gives a good illustration of what no-correlation looks like.



    If we use the terms in Theta to find the expected finishing position for a team playing 38 games, we find exactly what we expect, 10.5.

    played-rank:   A 1x2 matrix
     -------------
     7.27e-03  2.76e-01

    expected finish: 10.499999999999996

    Ok, now that we have seen what it looks like when we have a strong correlation, and no correlation, is there a correlation between goals and points?

    (def goals (get-matrices [:for] :pts))
    (plot-it goals)
    (def goal-theta (reg-epl goals))
    (def goal-lm (find-lm goals))
    (println "goals-points: " goal-theta)
    (println "goals-points (incanter): " goal-lm)

    Looking at the graph, while not quite as sharp as the wins-points graph, it definitely looks like scoring more goals earns you more points.



    To double check my function, I also used Incanter’s linear-model function to generate an intercept and slope. (And yes, I am relieved that they match.)

    goals-points:   A 1x2 matrix
     -------------
     2.73e+00  9.81e-01

    goals-points (incanter):  [2.7320304686089685 0.9806635460888629]

    We can superimpose the line from our regression formula on the graph, to see how they fit together.

    (def goal-plot (scatter-plot (first goals) (second goals)))
    (defn plot-fn [x]
      (+ (* (second goal-theta) x) (first goal-theta)))
    (def plot-with-regression (add-function goal-plot plot-fn 0 100))

    (view plot-with-regression)



    The Answer
    We can calculate how many points we would expect the team to earn based on their 86 goals in 2012-13 and 64 goals in 2013-14.

    (println "86 goals = " (+ (first goal-theta)
                              (* (second goal-theta) 86)))

    (println "64 goals = " (+ (first goal-theta)
                              (* (second goal-theta) 64)))

    86 goals =  87.07011197597255
    64 goals =  65.49481001604704

    In the last year under Sir Alex, Manchester United earned 89 points, 2 more than the formula predicts. In their year under David Moyes, they earned 64 points, 1.5 less than the formula predicts. 

    Of the 25 point decline in Manchester United’s results, 21.5 points can be attributed to the failure of the offense under Moyes, and 3.5 points can be attributed to bad luck or other factors. 

    Manchester United’s attacking style isn’t just fun to watch, it is also the reason they win so much. Hopefully the team’s owners have learned that lesson, and will stick to attack minded managers in the future.

    You can find all of the code for the project on github.
              ML Class Notes: Lesson 1 - Introduction   

    I am taking the Machine Learning class at Coursera. These are my notes on the material presented by Professor Ng.

    The first lesson introduces a number of concepts in machine learning. There is no code to show until the first algorithm is introduced in the next lesson.

    Machine learning grew out of AI research. It is a field of study that gives computers the ability to learn without being explicitly programmed. Computers could be programmed to do simple things, but doing more complicated things required that the computer learn for itself. In a well-posed learning problem, a program is said to learn a task if its performance at the task improves with experience.

    Machine Learning is used for a lot of things including data mining in business, biology and engineering; performing tasks that can't be programmed by hand like piloting helicopters or computer vision; self-customizing programs like product recommendations; and as a model to try to understand human learning.

    Two of the more common categories of machine learning algorithms are supervised and unsupervised learning. Other categories include reinforcement learning and recommender systems, but they were not described in this lesson.

    Supervised Learning

    In supervised learning, the computer is taught to make predictions using a set of examples where the historical result is already known. One type of supervised learning task is regression, where the predicted value is in a continuous range (the example given was predicting home prices). Other supervised learning algorithms perform classification, where examples are sorted into two or more buckets (the examples given were email, which can be spam or not spam, and tumor diagnosis, which can be malignant or benign).

    Unsupervised Learning

    In unsupervised learning, the computer must teach itself to perform a task because the "correct" answer is not known. A common unsupervised learning task is clustering, which is used to group data points into categories based on their similarity to each other. Professor Ng gave the example of Google News, which groups related news articles, allowing you to select accounts of the same event from different news sources.

    The unsupervised learning discussion ended with a demonstration of an algorithm that had been used to solve the "cocktail party problem", where two people were speaking at the same time in the same room, and were recorded by two microphones in different parts of the room. The clustering algorithm was used to determine which sound signals were from each speaker. In the initial recordings, both speakers could be heard on both microphones. In the sound files produced by the learning algorithm, each output has the sound from one speaker, with the other speaker almost entirely absent.


              Take the Machine Learning Class at Coursera   

    Coursera is offering its Machine Learning course again, beginning March 8, and I highly recommend it. You already know the obvious, that it is a course on an incredibly timely career skill and it is free, but until you take the course you can't know just how good the course really is.

    You will learn how to write algorithms to perform linear regression, logistic regression, neural networks, clustering and dimensionality reduction. Throughout the course Professor Ng explains the techniques that are used to prepare data for analysis, why particular techniques are used, and how to determine which techniques are most useful for a particular problem.

    In addition to the explanation of what and why, there is an equal amount of explaining how. The 'how' is math, specifically linear algebra. From the first week to the last, Ng clearly explains the mathematical techniques and equations that apply to each problem, how the equations are represented with linear algebra, and how to implement each calculation in Octave or Matlab.

    The course has homework. Each week, there is a zip file that contains a number of incomplete matlab files that provide the structure for the problem to be solved, and you need to implement the techniques from the week's lessons. Each assignment includes a submission script that is run from the command line. You submit your solution, and it either congratulates you for getting the right answer, or informs you if your solution was incorrect.

    It is possible to view all of the lectures without signing up for the class. Don't do that. Sign up for the class. Actually signing up for the class gives you a schedule to keep to. It also allows you to get your homework checked. When you watch the lectures, you will think you understand the material; until you have done the homework you really don't. As good as the teaching is, the material is still rigorous enough that it will be hard to complete if you are not trying to keep to a schedule. Also, if you complete the course successfully, you will be able to put it on your resume and LinkedIn profile.

    You have the time. When I took the class, there was extra time built in to the schedule to allow people who started the course late to stay on pace. Even if you fall behind, the penalty for late submission is low enough that it is possible to complete every assignment late and still get a passing grade in the course.

    I am going to take the course again. I want to review the material. I also want to try to implement the homework solutions in Clojure, in addition to Octave. I will be posting regularly about my progress.

    You may also be able to find a study group in your area. I decided to retake the course when I found out that there was going to be a meetup group in my area. Even without a local group, the discussion forums are a great source of help throughout the class. The teaching assistants and your classmates provide a lot of guidance when you need it.


              Learning functional programming at Coursera   

    I am currently taking Martin Odersky's course Functional Programming Principles in Scala on Coursera. This is my first time taking a course from Coursera. At the same time I signed up for this course, I also signed up for a course on Reactive Programming that Odersky will be teaching with Erik Meijer and Roland Kuhn beginning November 4.

    There are hundreds of courses available on all sorts of subjects like humanities, science, engineering, and of course computer science, and all are free. In addition to the Scala course, I have started taking a machine learning course. Its format is the same as the Scala course, so I am going to assume the format is standard. (The machine learning course was the class that launched Coursera, which is another reason to think it is the standard.)

    Each week new video lectures are posted. Lectures are typically 10 to 15 minutes long, and the total amount of material each week is 1.5 to 2 hours. There has been a programming assignment each of the first 4 weeks. An extra week was provided for the 4th assignment, and after watching the week 5 lectures, it was clear that the assignment covered material from both weeks.

    After completing each assignment, it is submitted by using a 'submit' command in Scala's Simple Build Tool. After a couple of minutes, you can go to the assignment page on the course website and see your grade. 80% of each grade comes from passing automated tests, and 20% comes from a style checker, which will take points for using mutable state or nulls. You can submit each assignment up to 5 times with only the highest score being counted. (After that you can continue to submit the assignment, and you will receive the same feedback on your work, but you will not get credit for it.) You need to achieve an average of 70% on the homework to receive a certificate of completion for the course.

    I really enjoy the format of the lectures. Some of the time Odersky is talking in front of the camera, but most of the time there are slides up on the screen. He is able to write on the slide. The translucent image of his head as he leans over a slide, or his hand and pen as he writes, is a really minor feature that somehow makes the video more interesting to watch. From time to time, the video is paused while a question appears on the screen. Some questions are multiple choice and you submit an answer before moving on. Others are open ended (how would you write a function that…) and you are left to try it on your own, but there is nothing to submit before you hit continue. Odersky then proceeds to provide a complete explanation of the solution.

    The quality of the teaching is excellent. The course builds a foundation by teaching the substitution method of function evaluation (which if I had learned before, I have forgotten it), then moves on to recursion, higher order functions and currying. Because Scala is a hybrid functional/object oriented language, there has also been a lot of discussion of object hierarchies and Scala's type system. Pattern matching, tuples and lists have also been covered.

    I have found all of the assignments to be challenging. The format is great. You download a zip file that contains a skeleton for the solution and a series of test cases. The tests don't cover the whole assignment but they provide a good start, and give guidance on how to write additional tests. The first week I spent a lot of time, because I decided to read Scala for the impatient until I knew enough syntax to solve the problem. (It would have been faster if lists had been covered before chapter 13). After that, I would estimate that I have spent 6 or 7 hours per week on the assignments.

    I believe that I am learning the material better through the course than I would by reading a book. I have a tendency when reading a book to skim parts that don't interest me as much, or that I think aren't relevant to things I am likely to do. Also, the graded homework means that I have to stick with a problem until I get it right, rather than until I think I know what I am doing.

    I did have a little apprehension at first because the course assumes that you are going to be working with Eclipse, which I have just never really gotten the feel for. I remembered setting up Scala, SBT and Eclipse to be challenging. The course provided clear written instructions and video instructions for installing all of the necessary tools, with all of the appropriate download links.

    The workload is not trivial, but I highly recommend taking classes at Coursera. The teaching is excellent. The variety of courses is amazing. I am very grateful to them for making such wonderful resources available for free.


              Facebook 'bots' write own language, start communicating sans humans   
    Using machine learning algorithms, the "dialog agents" were left to converse freely in an attempt to strengthen their conversational skills.
              Sr Software Engineer ( Big Data, NoSQL, distributed systems ) - Stride Search - Los Altos, CA   
    Experience with text search platforms, machine learning platforms. Mastery over Linux system internals, ability to troubleshoot performance problems using tools...
    From Stride Search - Tue, 04 Apr 2017 06:25:16 GMT - View all Los Altos, CA jobs
              Network Engineer - Daimler - Sunnyvale, CA   
    MBRDNA is headquartered in Silicon Valley, California, with key areas of Advanced Interaction Design, Digital User Experience, Machine Learning, Autonomous...
    From Daimler - Thu, 13 Apr 2017 05:42:50 GMT - View all Sunnyvale, CA jobs
              Senior Software Engineer - Amazon Corporate LLC - New York, NY   
    Machine learning experience. What's the business opportunity? We also own internal services for launching, managing, and monitoring of those placements....
    From Amazon.com - Sat, 11 Mar 2017 00:47:45 GMT - View all New York, NY jobs
              Software Dev Engineer -- Ad Platform - Amazon Corporate LLC - New York, NY   
    Machine learning experience. What's the business opportunity? We also own internal services for launching, managing, and monitoring of those placements....
    From Amazon.com - Wed, 08 Mar 2017 06:39:18 GMT - View all New York, NY jobs
              Business Continuity / Disaster Recovery Architect - Neiman Marcus - Dallas, TX   
    Advanced degree in Applied Mathematics, Business Analytics, Statistics, Machine Learning, Computer Science or related fields is a plus....
    From Neiman Marcus - Thu, 25 May 2017 22:30:52 GMT - View all Dallas, TX jobs
              AI Trying To Design Inspirational Posters Goes Horribly And Hilariously Wrong   

    Whenever an artificial intelligence (AI) does something well, we’re as impressed as we are worried. AlphaGo is a great example of this: a machine learning system that is better than any human at one of the world’s most complex games. Or what about Google’s neural networks that are able to create their own AIs autonomously?...

    The post AI Trying To Design Inspirational Posters Goes Horribly And Hilariously Wrong appeared first on Breaking News, Sports, Entertainment.


              Data Scientist - Machine Learning, Python   

              Bir Cesaret Örneği: Rhodeus Script ve Talha Zekeriya Durmuş yazısına Emine tarafından yapılan yorumlar   
    I hope Talha's successes continue. What he did truly takes great courage. The TÜBİTAK jury really can talk nonsense. We entered another competition with a project that used machine learning, and they asked an absurd question about a definition we had made. I hope such setbacks don't happen to him.
              Clutter Coming To Office 365 By Default Starting June   

    Clutter is coming to improve your inbox experience in Office 365. Microsoft launched their new tool towards the end of last year in order to streamline the email experience on its platform. This new addition made it easier to focus on the email messages that matter the most, while moving the less important ones into a separate folder — quite similar to how other de-cluttering tools work on competing email service providers like Gmail and Yahoo. Machine learning at its finest. And now after having sorted over one million emails, and saving users an average of 82 minutes a day,

    The post Clutter Coming To Office 365 By Default Starting June appeared first on .


              Nutanix and Google Cloud team up to simplify hybrid cloud   

    Today, we’re announcing a strategic partnership with Nutanix to help remove friction from hybrid cloud deployments for enterprises. We often hear from our customers that they’re looking for solutions to deploy workloads on premises and in the public cloud.

    Benefits of a hybrid cloud approach include the ability to run applications and services, either connected or disconnected, across clouds. Many customers are adopting hybrid cloud strategies so that their developer teams can release software quickly and target the best cloud environment for their application. However, applications that span both infrastructures can introduce challenges. Examples include the difficulty of migrating workloads that need portability, such as dev-test, and of managing across different virtualization and infrastructure environments.

    Instead of taking a single approach to these challenges, we prefer to collaborate with partners and meet customers where they are. We're working with Nutanix on several initiatives, including:

    • Easing hybrid operations by automating provisioning and lifecycle management of applications across Nutanix and Google Cloud Platform (GCP) using the Nutanix Calm solution. This provides a single control plane to enable workload management across a hybrid cloud environment.

    • Bringing Nutanix Xi Cloud Services to GCP. This new hybrid cloud offering will let enterprise customers leverage services such as Disaster Recovery to effortlessly extend their on-premise datacenter environments into the cloud.

    • Enabling Nutanix Enterprise Cloud OS support for hybrid Kubernetes environments running Google Container Engine in the cloud and a Kubernetes cluster on Nutanix on-premises. Through this, customers will be able to deploy portable application blueprints that target both an on-premises Nutanix footprint as well as GCP.

    In addition, we’re also collaborating on IoT edge computing use-cases. For example, customers training TensorFlow machine learning models in the cloud can run them on the edge on Nutanix and analyze the processed data on GCP.

    We’re excited about this partnership as it addresses some of the key challenges faced by enterprises running hybrid clouds. Both Google and Nutanix are looking forward to making our products work together and to the experience we'll deliver together for our customers.


              I/O 2017: Invented w/ Others   
    Watch live!

    I am so excited the time has come for Google I/O to kick off for the second year in our backyard, with thousands of developers coming to join hundreds of Google engineers as we geek out on tech in a festival atmosphere. I have already been at several pre-events, and what blows me away the most about conferences like this is how international they are. With all that is going on in the world, it feels great to come together as one. An immense amount of work goes into putting on the event, crafting great content for developers, and of course building the products that come to life. This isn’t just about the products that we build at Google, but a ton of partners work incredibly hard to get things ready to showcase platforms and ecosystems.

    As I reflect on what I am most proud of this year, one theme is exactly this: working together with partners and ecosystems. We are fortunate enough to have a few large and mature platforms out there such as Android and the Web, and each is still pushing the boundaries and innovating. We also have new platforms and extensions that tie together our own services as well as these ecosystems. I have seen an increased effort to unify and work together where it makes sense. For example, running Android apps on ChromeOS, running Web apps as first-class experiences via PWA, having immersive reality (VR and AR) come to various platforms, and gluing things together with IoT and Actions on Google.

    Computing is unbundling, and individual components have more wiggle room and ability to connect and compose with each other. These connections are supported by empowering glue such as Cloud Functions, and machine learning is there to make experiences work optimally for users. When mobile first hit, much of the UX work was about coming up with the right interaction on the small device and how it tied into the context it now had about you (from your location to your contacts). Now, the best experiences take that attention to detail, and they marry it with smarter services. Google Photos is a canonical example that has a very nice UI for sure, but the magic is in how I can search for [my son in green when he was four] and get results. Slack has done a great job with their UI, but it was their search functionality that originally won me over from Campfire. It is becoming a given that you should be thinking about how you can best use data to up-level your experiences. We have a lot of talks on TensorFlow from beginner to advanced, but we also have many APIs that do the work for you (e.g. vision APIs). You don’t have to take a linear algebra course again to get started (but I recommend this fun way of doing so!).

    Not Just Invented Here

    I am really excited to hear how Android developers react to the Kotlin announcement today. I have heard developers ask about our support for such a long time, and I am really excited to let them know that we are standing by it. I first used Kotlin several years back when I was frustrated at the level of Java language support, and it is a fantastic modern language. Since then the community has grown, and it has broadened its targets. I am so happy that we have embraced this rather than trying to do something new for the sake of it. We have a lot of great information on how to get started.

    Beyond Kotlin, we have great new improvements in the tools and SDKs. The new architecture components such as the lifecycle management helpers are going to save so much time (and frustration). Chet, Tor, and Romain are going to have a whale of a time on stage this year showing off all the goodness. This has been the best developer-focused Android release in a while, and this is just the start for Kotlin and these developer tools.

    All boats rising

    The Web has always been about community and shared evolution. It is the democracy that, yes, requires compromise and working together, but results in shared change that doesn’t give too much power to a particular entity. This year we see the Web innovate faster than ever, resulting in great new experiences such as Twitter’s new PWA that comes in as a tiny bundle to get going quickly and picks up steam from there. And then there is the amazing Wego experience. The story behind that is particularly fun, as an app developer picked up Polymer (2.0 just released!) and two months later had it up and running.

    We have new tools to help you take your web experience to the next level. Lighthouse 2.0 was announced and now comes baked into Chrome DevTools, and Workbox takes our battle-tested sw-toolbox and sw-precache and packages them in a nicer bundle that lets you pick and choose what you need to bring service workers to your app. But again, it isn’t about what we are doing. Microsoft talked about their support for PWAs at Build last week, and other browser vendors are working with us to support the latest and greatest as soon as possible. Outside of the browsers, the framework and app developers are busy working out how to optimize for the constraints and opportunities of mobile, whilst also extending their support to desktop and other (sometimes surprising!) form factors. The Web continues to be about reaching all of your users and meeting them where they are.

    We built a fun Google I/O action!

    Reaching the full bundle

    As the mature platforms continue to push the bar, we are seeing other form factors come to life too. Whether it be Actions on Google that can reach users through their Google Home, phones (and more!) giving you multi-modal access to services at the flick of a voice or text gesture, immersive new VR and AR experiences, or Android Things packaging IoT in a manner that makes it incredibly approachable and powerful.

    Bringing it together

    This year we brought Fabric together with Firebase, and today we are open sourcing more of the product (with more coming!) right as we add new functionality across the platform, including large new initiatives such as Firebase Performance.

    What I love about Firebase is how it brings you the best tools to help you build your applications, all packed with a top notch API console and SDKs that work together.

    This is all the tip of the iceberg. Unifying the unbundle, together is the cheesiest thing I have written in some time, but that is what I see when I look at where things are coming together this year. All platforms innovating quickly, but coming together where it makes sense to solve problems.

    All the prep is done, now the fun part…. getting to meet old friends and new at Google I/O.

    If you can’t be here in person or at an I/O Extended event, please tune in, and we are bringing more Google Developer Days to you this year!

    “platforms innovate
    together computing unbundles
    and then we all unify.” — Stephen Colbert

    I/O 2017: Invented w/ Others was originally published in Ben and Dion on Medium, where people are continuing the conversation by highlighting and responding to this story.


              Data Engineer with Scala/Spark and Java - Comtech LLC - San Jose, CA   
    Job Description Primary Skills: Big Data experience 8+ years exp in Java, Python and Scala With Spark and Machine Learning (3+) Data mining, Data analysis
    From Comtech LLC - Fri, 23 Jun 2017 03:10:08 GMT - View all San Jose, CA jobs
              SOFTWARE ENGINEER II - Microsoft - Redmond, WA   
    Product engineering experience with OS components Business Analyst or Machine Learning experience. Foundational promise to be the most secure collection of...
    From Microsoft - Sat, 25 Mar 2017 02:47:33 GMT - View all Redmond, WA jobs
              Data Scientist - Wink - New York, NY   
    Hands-on experience with supervised and unsupervised machine learning algorithms for regression, classification, and clustering....
    From Wink - Thu, 18 May 2017 06:17:27 GMT - View all New York, NY jobs
              How AI, IoT and Blockchain Will Shake Up Procurement and Supply Chains    
    The next BriefingsDirect digital business thought leadership panel discussion focuses on how artificial intelligence (AI), the Internet of things (IoT), machine learning (ML), and blockchain will...

    Learn more about BriefingsDirect, Dana Gardner's blog, and other Interarbor Solutions services by visiting www.interarbor-solutions.com.

              Single-Grasp Object Classification and Feature Extraction with Simple Robot Hands and Tactile Sensors   
    Classical robotic approaches to tactile object identification often involve rigid mechanical grippers, dense sensor arrays, and exploratory procedures (EPs). Though EPs are a natural method for humans to acquire object information, evidence also exists for meaningful tactile property inference from brief, non-exploratory motions (a ‘haptic glance’). In this work, we implement tactile object identification and feature extraction techniques on data acquired during a single, unplanned grasp with a simple, underactuated robot hand equipped with inexpensive barometric pressure sensors. Our methodology utilizes two cooperating schemes based on an advanced machine learning technique (random forests) and parametric methods that estimate object properties. The available data is limited to actuator positions (one per two-link finger) and force-sensor values (eight per finger). The schemes are able to work both independently and collaboratively, depending on the task scenario. When collaborating, the results of each method contribute to the other, improving the overall result in a synergistic fashion. Unlike prior work, the proposed approach does not require object exploration, re-grasping, grasp-release, or force modulation and works for arbitrary object start positions and orientations. Due to these factors, the technique may be integrated into practical robotic grasping scenarios without adding time or manipulation overheads.
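    The abstract names random forests but gives no code. As a toy illustration of the idea (bootstrap samples plus feature bagging over simple trees, here depth-1 stumps) on grasp-like data, here is a stdlib-only sketch. The feature layout (two actuator positions plus eight pressure readings), the value ranges, and the class labels are all invented for this sketch, and the paper's parametric property-estimation scheme is skipped entirely:

    ```python
    import random

    def train_stump(X, y, feat_idxs):
        """Pick the (feature, threshold) split with the best majority-vote
        accuracy among a random subset of features."""
        best = None
        for f in feat_idxs:
            for t in sorted({x[f] for x in X}):
                left = [yi for x, yi in zip(X, y) if x[f] <= t]
                right = [yi for x, yi in zip(X, y) if x[f] > t]
                if not left or not right:
                    continue
                acc = (max(left.count(c) for c in set(left))
                       + max(right.count(c) for c in set(right))) / len(y)
                if best is None or acc > best[0]:
                    best = (acc, f, t,
                            max(set(left), key=left.count),
                            max(set(right), key=right.count))
        return best[1:]  # (feature, threshold, left_label, right_label)

    def train_forest(X, y, n_trees=25, seed=0):
        rng = random.Random(seed)
        n, d = len(X), len(X[0])
        forest = []
        for _ in range(n_trees):
            rows = [rng.randrange(n) for _ in range(n)]          # bootstrap sample
            feats = rng.sample(range(d), max(1, int(d ** 0.5)))  # feature bagging
            forest.append(train_stump([X[i] for i in rows],
                                      [y[i] for i in rows], feats))
        return forest

    def predict(forest, x):
        votes = [(l if x[f] <= t else r) for f, t, l, r in forest]
        return max(set(votes), key=votes.count)

    # Synthetic single-grasp vectors: 2 actuator positions + 8 pressure
    # readings; "soft" objects read low pressures, "rigid" read high.
    rng = random.Random(1)
    X, y = [], []
    for label, lo, hi in [("soft", 0.1, 0.3), ("rigid", 0.7, 0.9)]:
        for _ in range(20):
            X.append([rng.uniform(0.0, 1.0) for _ in range(2)]   # actuator positions
                     + [rng.uniform(lo, hi) for _ in range(8)])  # pressure sensors
            y.append(label)

    forest = train_forest(X, y)
    print(predict(forest, [0.5, 0.5] + [0.8] * 8))  # rigid
    print(predict(forest, [0.5, 0.5] + [0.2] * 8))  # soft
    ```

    The point of the sketch is only the structure of the technique: each tree sees a resampled dataset and a random feature subset, and the ensemble votes, which is what makes random forests robust on small, low-dimensional feature sets like a single grasp's readings.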
              How Lastminute.com Uses Machine Learning to Improve Real-Time Travel Bookings   
    The next BriefingsDirect Voice of the Customer digital transformation case study highlights how online travel and events pioneer lastminute.com leverages big-data analytics with speed at scale to...

    Learn more about BriefingsDirect, Dana Gardner's blog, and other Interarbor Solutions services by visiting www.interarbor-solutions.com.

              Meet George Jetson – Your New AI-Empowered Chief Procurement Officer   
    The next BriefingsDirect technology innovation thought leadership discussion explores how rapid advances in artificial intelligence (AI) and machine learning are poised to reshape procurement -- like...

    Learn more about BriefingsDirect, Dana Gardner's blog, and other Interarbor Solutions services by visiting www.interarbor-solutions.com.

              Alation Centralizes Enterprise Data Knowledge by Employing Machine Learning and Crowdsourcing   
    The next BriefingsDirect Voice of the Customer big-data case study discussion focuses on the Tower of Babel problem for disparate data, and explores how Alation manages multiple data types by...

    Learn more about BriefingsDirect, Dana Gardner's blog, and other Interarbor Solutions services by visiting www.interarbor-solutions.com.

              Google Photos is making sharing pictures with friends even easier   
    TwitterFacebook

    On Wednesday, Google announced several updates to the Photos app that will make sharing selfies, your trip to Machu Picchu, and that ridiculous sign you saw on the way to work even easier.

    SEE ALSO: How to post Google Photos' awesome animations to Instagram

    These features were first announced at Google's I/O conference in May. Now we have even more information about the updates. A new feature called “suggested sharing,” for instance, uses machine learning to automatically suggest who to share photos with based on your habits.

    Image: google

    The app will also proactively search for photos to share by recognizing events like weddings and pre-selecting images and people. That means less endless scrolling for what to share, or who to share with. You can share directly in the app or via email or phone. Read more...

    More about Google, Photos, Sharing, Google Photos, and Tech

              Machine Learning Consultant (Remote 50% travel)   
    Solution Partners, Inc. New York, NY
              The computer beats a Go champion   

    For once, the game of Go is in the news! Every newspaper ran the same headline: for the first time, a computer has beaten a Go champion. This echoes the computer's victory over the world chess champion Garry Kasparov nearly twenty years ago!

    This victory made headlines because it surely marks a symbolic milestone in the progress of artificial intelligence. Behind this victory are Google and its company DeepMind.

    If you don't know the game of Go, you should understand that there are far more combinations in Go than in chess. A small example to illustrate the difference: the number of possible games after 1, 2, and 3 moves.

    • first move: you have 20 possible moves in chess; in Go it's 361 (19x19);
    • second move: 400 games (20x20) in chess; in Go, 129,960 (361x360);
    • third move: 8,902 in chess, 46,655,640 in Go (more than 46 million!).
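    The Go-side counts above are simple falling products, ignoring captures and move legality (which only matters deeper into the game). A quick sketch to reproduce them, with a function name of our choosing:

    ```python
    from math import prod

    def go_sequences(plies, board_size=19):
        # Upper bound on move sequences in Go: each ply plays on one of the
        # remaining empty intersections (captures and illegal moves ignored).
        points = board_size * board_size  # 361 intersections on a 19x19 board
        return prod(points - i for i in range(plies))

    print(go_sequences(1))  # 361
    print(go_sequences(2))  # 129960 (361 x 360)
    print(go_sequences(3))  # 46655640 (361 x 360 x 359)
    ```

    The chess figures (20, 400, 8,902) come from actual move generation rather than a simple product: the branching factor varies with the position, so the counts aren't a fixed power of 20.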

    With 'machine learning' algorithms, the computer is now able to compute the move it should play from an analysis of millions of existing games. This is no longer a 'brute force' approach, in which the computer tried to calculate every possible move in search of the best one.

    For a detailed article (in English), see the cover story of the journal 'Nature'.

    Wikipedia article on the game of Go.


              Principal Program Manager - Microsoft - Redmond, WA   
    Machine Learning, R, Python. Built on a decade of internal experience running an Exabyte-scale big data platform, Azure Data Lake Analytics and Azure Data Lake...
    From Microsoft - Thu, 30 Mar 2017 21:09:06 GMT - View all Redmond, WA jobs
               How artificial intelligence is taking on ransomware    
    Machine learning can analyze samples of good and bad software and figure out what combination of factors is likely to be present in malware.
              call: VIRTUALITIES AND REALITIES, Open Fields-2 conference   
    Deadline: 7 June 2017
    Call for entries: VIRTUALITIES AND REALITIES, the 2nd Open Fields Conference and RIXC Art Science Festival 2017. October 19–21, 2017, Riga. http://festival2017.rixc.org/
    DEADLINE for conference and exhibition submissions EXTENDED: June 7, 2017
    VIRTUALITIES AND REALITIES is the theme of this year’s RIXC Art Science festival in Riga, Latvia, which aims to establish a space for artistic interventions and conversations about the complex implications of immersive technologies. The festival programme will include the 2nd Open Fields conference, workshops, performances, and exhibitions presenting the most innovative approaches in artistic research. The festival will take place October 19–21, 2017, in Riga’s most significant contemporary art venues: kim? Contemporary Art Centre and RIXC Gallery, as well as the Art Academy of Latvia and the Latvian National Museum of Art.

    Immersive technologies coupled with superior virtual environments, artificial intelligence algorithms, faster processors, and biometrics are launching a new era in virtual experiences, entertainment, and storytelling. At the same time, these technologies have the potential for reinforcing stereotypes, contributing to massive economic and social disruptions, and implementing new systems of invasive monitoring and control.

    What do these new developments in VR/AR mean for education, entertainment, social policy, and systems of codified knowledge? Like their predecessors the telephone, television, and mobile phone, what are the impending new vistas and reduced horizons? Biometrics and the uploading and tracking of personal data span areas from health care to advertising, with implications for law, criminal justice, entertainment (gaming), education and sports. Machine learning and […]
              Microsoft to sell Box storage to Azure customers   
    Microsoft has announced a new tie-up with Box that will extend the intelligence and reach of its Azure cloud platform. Under the terms of the deal, Box will now use Azure as a strategic cloud platform, with a new "Box on Azure" offering now available to enterprise customers around the world. However, the partnership will also see Box getting the chance to use Azure’s artificial intelligence and machine learning capabilities for the first time. This could soon mean that Box customers would be able to use highly advanced tools, such as advanced content processing and voice control, to power… [Continue Reading]

              Watch The Startup India Standup India Event Live, Here   
    The hugely anticipated Startup India Standup India is LIVE, and here’s how you can watch it. Deepanshu Khandelwal, Editor-at-large and co-founder at The Tech Portal. He is a tech enthusiast with interests in new-age technology fields like AI, Machine Learning, AR/VR, Outer Space and related stuff. Drop him a mail anytime; he's very reachable.
              Machine Learning Specialist   
    CA-Santa Clara, Machine Learning Specialist. Location: Santa Clara, CA. 3-6 month contract-to-hire. Embedded (Raspberry Pi) experience is a huge plus. Most important is experience in computer vision and deep neural networks. Experience developing applications utilizing Artificial Intelligence, Computer Vision, Machine Learning, Image Processing, and/or Computer Graphics. Experience with mobile device management,
              Machine learning alone is not enough; the human mind is still needed   

    Science-fiction authors have long been fascinated by the idea that robots could take over the world, and they ponder what might be done to protect the world from such a takeover. But this notion is far removed from reality. In fact, we know today: machines – and machine learning – function at [...]

    The post Machine learning alone is not enough; the human mind is still needed appeared first on Mehr Wissen.


              A CES Takeaway: Don't Fear Robots And Artificial Intelligence, Fear Politicians   
    Maroon 5 keeps popping up on my Pandora stations, so artificial intelligence (AI) and machine learning still have a ways to go. Even if AI can beat us at Go. But, wow, that aside, the technologies showcased at the 2017 Consumer Electronics Show (#CES2017, actually the 50th annual, sponsored by the [...]
              Google Photos shared libraries feature is rolling out now   
    In mid-May, Google announced a new inbound feature for Google Photos called Suggested Sharing; with it, users are presented with sharing suggestions made possible via machine learning. That feature is rolling out to users this week, the company has announced, making Google Photos even easier to use; the shared libraries feature is rolling out, too. Once it arrives on your … Continue reading
              Make a collaborative drawing with Google’s neural network   
    Last April, Google’s machine learning crew revealed AutoDraw, a fun little demo of all that neural network theory. In a nutshell, the web app tries to guess what your scribble looks like and identify it. Now Google’s researchers are taking that idea one step further. Called Sketch-RNN, this “recurrent neural network” model does for doodles and drawing what autocomplete does … Continue reading
              Comment on Design engineering plots course to redress gender imbalance by Andrew Watson   
    Against a background of engineers complaining about the misuse of the title 'engineer', more often than not the photographs that accompany your articles show 'engineers' operating tools and machinery. As a mechanical engineering graduate with 20 years of experience working in a variety of disciplines, I can say this does not reflect my personal engineering experience, and I'm certain that such activity will constitute a tiny fraction of the Design Engineering MEng course discussed. Sample phrases from the article include 'design engineering in the 21st Century', 'fundamental science through to robotics and aesthetics', 'distinct type of course that is unlike any other' and 'machine learning', yet you choose to show somebody operating a jigsaw! Come on, you need to try harder if you want ANY bright, capable young people to consider a career in engineering, never mind raising the proportion of women entering the profession!
              Machine Learning Software Engineer - Intel - Toronto, ON   
    In order to take advantage of the many opportunities that we see in the future for FPGAs, PSG is looking for engineers to join our teams....
    From Intel - Sat, 17 Jun 2017 10:23:09 GMT - View all Toronto, ON jobs
              Research Engineer - Machine Learning & Intelligent Systems   
    Research Engineer - Machine Learning & Intelligent Systems The mission of the lab is to conduct cutting-edge research by exploring theories and building systems. Our lab is specialized in research on the following areas: Machine Learning Data...
              (USA-NY-New York) VP HRIS   
    VP HRIS
    **Requisition Number:** 17-16996 **State:** New York **City:** New York **Shift:** Not Applicable
    **Job Description:** Wolters Kluwer is looking for a VP of Information Technology to work out of our New York City or Riverwoods, IL office. Wolters Kluwer (www.wolterskluwer.com) is a global leader in professional information services. Professionals in the areas of legal, business, tax, accounting, finance, audit, risk, compliance and healthcare rely on Wolters Kluwer's market-leading information-enabled tools and software solutions to manage their business efficiently, deliver results to their clients, and succeed in an ever more dynamic world. Headquartered in Alphen aan den Rijn, the Netherlands, we serve customers in over 180 countries, maintain operations in over 40 countries and employ 19,000 people worldwide. Wolters Kluwer reported 2016 annual revenues of €4.3 billion. Wolters Kluwer combines deep domain knowledge with specialized technology. Our portfolio offers software tools coupled with content and services that customers need to make decisions with confidence. Every day, our customers make critical decisions to help save lives, improve the way we do business, and build better judicial and regulatory systems. We help them get it right.

    The HR Systems Leader directs the strategy, programs, design, configuration, testing, security, and administration of global HR systems across the company (employees in 46 countries) in alignment with the HR strategy and in collaboration with the IT organization. This leader will have significant interaction with all levels of the HR organization, the IT department, other members of executive management, and vendor partners. The incumbent will be a leader and hands-on practitioner, providing technical expertise across the evolving HR systems footprint. This leader will help build and deliver a technology roadmap that balances the strategic and operational needs of HR and will work with IT to ensure systems are well governed and integrated. The HR Systems Leader will drive the evolution and activities that facilitate global system adoption and is responsible for identifying opportunities to enhance reliability, HR team productivity, and system quality satisfaction among all stakeholders.

    ESSENTIAL DUTIES & RESPONSIBILITIES:

    THE OPPORTUNITY
    The HR function at Wolters Kluwer is undergoing a significant transformation from a distributed, local organization to a global function operating at scale wherever possible while still accommodating local requirements as appropriate to support our business. Building the infrastructure to support the organization we are becoming is the priority for this role, which also includes managing the transition along the way. Ultimately, we will select and deploy a global SaaS HCM and use that as the core tool to drive operational efficiency and a superior customer experience for our employees and stakeholders.

    KEY RESPONSIBILITIES
    The incumbent for this position will play a principal role, in collaboration with key stakeholders, as the architect of our future HR infrastructure footprint including, but also beyond, the core HCM platform (i.e. service center/workflow technology, other technology tools, use of AI and machine learning, etc.). The work will include driving the technical track of the HR transformation agenda (i.e. evaluating the marketplace; being a key voice in vendor selection; and owning the technical elements of all configuration and implementation projects, which includes planning, development, testing, implementation, customer reporting, user support, and business requirements for integrations).

    TECHNOLOGY VISION AND ROADMAP
    • Understand the Wolters Kluwer workforce and the future vision for how HR will support the business. Maintain an aligned multi-year technology roadmap that supports that vision, which includes maximizing what we have as well as researching, evaluating, and recommending high-impact, high-value HR technology solutions that improve/enhance HR's contribution to the organization
    • Work closely with the IT organization to develop and accurately understand the "current footprint" of HR technology and the planned future state, and maintain a transition plan for how we evolve (i.e. reduce redundant systems, effectively streamline processes, etc.)
    • Work closely with HR and business stakeholders to understand business requirements and integrate them into the roadmap as appropriate
    • Partner with HR COEs to design a reporting strategy to deliver the data required for meaningful analytical insights to drive people decisions

    TECHNICAL AND PROCESS LEADERSHIP
    • Continually assess and identify opportunities for improvement in the existing environment at any given point to better support global talent and people priorities
    • Act as subject matter expert in the standardization and implementation of global HR business processes, process controls, best practices, and policies/procedures that align to the requirements of SaaS technology; recommend business process improvements as appropriate
    • Manage the design, configuration, testing, implementation, documentation, communication, and training for new tools or changes to existing tools. Lead all technical work in keeping with established IT standards for Wolters Kluwer
    • Work closely with IT to ensure a proper governance structure and change control process to receive, evaluate, approve, and document process/system changes
    • Provide oversight and structure to HR projects with the objective of repeatable project success by organizing projects in a structured portfolio. Partner with the HR PMO to provide methods, processes, and tools to effectively plan, execute, and monitor projects
    • Effectively manage ongoing relationships with vendor partners to cultivate collaborative relationships that support our success while also ensuring adherence to established processes and agreed-upon service levels. Define a set of metrics and reports for monitoring performance. Promptly address any issues and drive to quick resolution

    FINANCIAL AND RISK MANAGEMENT
    • Develop and maintain the HR Systems budget; collaborate with IT as required to ensure we always maximize the value of our investments and work together to be as efficient as possible while maintaining a high standard of excellence
    • Plan, budget, and forecast HR Systems needs and application requirements. Maintain awareness of vendor plans and the potential impact of those plans on current systems and the HR technology roadmap, keeping executive management informed as appropriate
    • Partner with IT to ensure we have a robust and effective business continuity/disaster recovery plan that is tested on a regular basis; partner with IT, Legal, and vendor partners to ensure we maintain awareness of and comply with all regulatory mandates (e.g. privacy)

    USER EXPERIENCE AND SERVICE EXCELLENCE
    • Consistently strive for a consumer-grade user experience: intuitive, easy, effective. Create forums to gather relevant "voice of the customer" feedback so that we consistently "delight" users in their interactions with HR
    • Be involved with vendor partners and internal owners of related systems: know the roadmap and effectively plan, well in advance, for changes, improvements, and enhancements. Ensure we have a properly scaled change and communication plan to support our user community as HR tools evolve

    TEAM LEADERSHIP AND DEVELOPMENT
    • Organize and lead the HR Systems team to manage day-to-day activities and deliver on commitments with quality. Ensure the team is positioned to effectively support activities including, but not limited to, system administration, reporting enablement, process configuration, business analysis, testing, security, controls, and data conversion
    • Cultivate a customer service mindset throughout the team with the goal of creating a seamless, excellent user experience for all users
    • Work with business sponsors and owners to identify training needs of administrators and employ cost-efficient methods to ensure that the proper training is conducted so all team members are properly prepared for their respective roles

    **Qualifications:**
    Education
    Minimum: Bachelor's degree required, preferably in technology and/or human resources
    Preferred: Formal training in project management highly desirable
    Experience
    • 10+ years of experience, five of which with global accountability, managing HR systems including on-premise systems, cloud technology (SaaS), and vendor platforms that interface with company systems
    • Experience with all elements of a global HR technology ecosystem including HR shared services (i.e. case management, document management, workflow portal); HCM (i.e. core system, talent management, compensation, recruiting); Payroll; and other ancillary systems that round out a contemporary, robust HR technology footprint
    • SaaS HR System (i.e.
Workday, Success Factors, or similar\) configuration experience with clear view on roles, accountabilities, and an appropriate separation of duties •HR technology transformation experience required, moving from legacy on premise systems that are distributed to a modern, integrated cloud\-based technology infrastructure over time •Proven track record of successfully managing and delivering technology projects on schedule and on budget based on a structured approach to defining scope and requirements, developing project plans, managing issues/risks, creating meaningful project artifacts, monitoring project outcomes, and managing change •Proven ability to successfully navigate large corporate organizations building sponsorship and support across diverse stakeholder groups Other Knowledge, Skills, Abilities or Certifications •Strong business and analytical acumen, able to synthesize complex information and formulate an effective plan of action •Highly developed executive presence with strong collaboration skills; able to articulate a value proposition and secure buy\-in and support from executive leadership, peers, and staff •Experience building, developing, leading, mentoring, and managing high performance teams, particularly cross cultural and virtual\. 
Inspires people to create measurable results; highly approachable and supportive •Highly adept at successfully navigating and managing change \- communicates change effectively and completely; role models effective behaviors; builds commitment and overcomes resistance •Excellent verbal and written communication skills including the ability to prepare clear and succinct presentations for and speak comfortably to all levels of management, including senior executives •Demonstrates initiative and drive; self\-motivated, organized, and detail oriented; creative and analytical •Comfortable in a fast\-paced, global work environment undergoing transformation •Microsoft Office Suite \(Outlook, Word, Excel, Power Point, Project, and Visio\) TRAVEL REQUIREMENTS Occasional domestic and global travel Apply to: https://www\.wolterskluwer\.apply2jobs\.com/ProfExt/index\.cfm?fuseaction=mExternal\.showJob&RID=16996&CurrentPage=1 ABOUT WOLTERS KLUWER Wolters Kluwer N\.V\. \(AEX: WKL\) is a global leader in information services and solutions for professionals in the health, tax and accounting, risk and compliance, finance and legal sectors\. We help our customers make critical decisions every day by providing expert solutions that combine deep domain knowledge with specialized technology and services\. Wolters Kluwer reported 2016 annual revenues of €4\.3 billion\. The company, headquartered in Alphen aan den Rijn, the Netherlands, serves customers in over 180 countries, maintains operations in over 40 countries and employs 19,000 people worldwide\. Wolters Kluwer shares are listed on Euronext Amsterdam \(WKL\) and are included in the AEX and Euronext 100 indices\. Wolters Kluwer has a sponsored Level 1 American Depositary Receipt program\. The ADRs are traded on the over\-the\-counter market in the U\.S\. \(WTKWY\)\. For more information about our solutions and organization, visit www\.wolterskluwer\.com, follow us on Twitter, Facebook, LinkedIn, and YouTube\. 
EQUAL EMPLOYMENT OPPORTUNITY Wolters Kluwer U\. S\. Corporation and all of its subsidiaries, divisions, and customer/business units is an Equal Opportunity / Affirmative Action employer\. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or protected veteran status\. INFORMATION For any assistance with your application for this job opening, please call the HR Source at \(888\) 495\-4772 or email HRSource@WoltersKluwer\.com\. TTY is also available at 888 \(495\) 4771\.
              (SGP) Project Managers – Clearance (RCS K) (Singapore / Malaysia)   
    42215

    **Work Location: Singapore / Malaysia (Contract will be based on candidate's location) - 2 Openings**

    **Overall Role Purpose**
    + To manage and deliver IT projects from build to deployment, within agreed time, cost and quality
    + To lead virtual teams and IT vendors during the project lifecycle
    + To transition hosting, support and service management of new capabilities post-development
    + To maintain existing solutions depending on new or changed business requirements

    **Accountabilities**
    Customers, Stakeholders and People Management
    + Build effective relations with business functions
    + Lead and manage virtual IT teams and external IT vendors for the duration of projects

    Project and Process
    + Ensure that solutions are delivered to agreed budget, scope and timelines, in all project phases
    + Lead during systems requirements, analysis, design, development, and implementation of solutions
    + Adhere to Express standards including architecture, security, development and process

    **Requirements**
    + Degree in Computer Science, Information Technology, Business Administration or related fields
    + Total of 5 to 7 years of work experience in development and project management, with strong systems analysis, design, programming, testing or implementation skills
    + Exposure to:
      + Service-oriented, event-driven architecture and technologies (e.g. Software AG WebMethods products: ESB, BPM, Rules Engine, Complex Event Processing; WebSphere, MQ; Java)
      + Cognitive technologies for data mining, machine learning, advanced analytics, cognitive analysis, natural language computing and predictive analysis
      + Operational or express logistics knowledge, preferably with exposure to Clearance processes, systems and data
    + Passion for technology
    + Good communicator, team player, positive and can-do attitude
    + Able to work independently (e.g. individual contributor) as well as in a team (e.g. group contributor)
              Samuel Cooper   
    Institution/Organization: Fayetteville State University Department: Mathematics and Computer Science Academic Status: Undergraduate Student What conference theme areas are you interested in (check all that apply): Data Analytics and Visualization Data-Driven Modeling and Prediction Identification, Design, and Control Scientific Software and High-Performance Computing Interests: I am interested in computer security, machine learning and artificial intelligence, and […]
              Bogdan Czejdo   
    Institution/Organization: FSU Department: Math and CS Academic Status: Faculty What conference theme areas are you interested in (check all that apply): Data Analytics and Visualization Data-Driven Modeling and Prediction Interests: Machine Learning
              Airport Scanners   

    Aren't you tired of waiting in long queues to have your bags scanned on the conveyor belt at the airport? Every time we fly, whether domestically or internationally, the one thing everyone has to pass through is the scanner, or what many people call the X-ray, right? And it is invariably annoying when the scanning queue stretches out endlessly and takes ages to get through.

    Airport security systems around the world require every passenger to pass through a screening machine, the scanner, before boarding a commercial flight. There are two kinds: a scanner for people, shaped like a doorway you walk through, or in newer installations a portable scanner that sweeps your whole body while you stand still with your arms raised; and a scanner for carry-on baggage, a conveyor belt feeding an X-ray machine, with an officer watching the screen item by item. That does not count bags checked into the hold, which are screened in a different way.

    Today there is a new innovation to help you get through faster. The research and development unit of the Transportation Security Administration has been looking for new ways to scan carry-on bags and belongings, and the result is the Qylatron, a locker-style scanner system for airports. It looks like a honeycomb with five compartments, although the machine scans only two or three compartments at a time to prevent jams. The system can handle 600 people per hour.

    Using it at the airport is not hard at all: walk up to the scanner, place your belongings in a compartment, walk through the body scanner, then collect your items by opening the compartment from the other side. The Qylatron scans bags with X-rays, combined with machine learning that keeps improving its capabilities, plus chemical and radiation detectors to identify what is inside a bag. When it finds a suspicious object it raises an alert immediately, and the software can be updated as new threats emerge.

    Beyond its up-to-date detection system, the Qylatron also saves space: it occupies only 450 square feet, compared with more than 2,500 square feet for a conventional conveyor setup. The system has been trialled at the World Cup in Brazil and at Disneyland Paris, and most recently at Levi's Stadium in San Francisco. If those trials keep going well, we will surely see the Qylatron in use at airports.

    Credit: Dailygizmo


              Conrad Czejdo   
    Institution/Organization: UNC Department: Arts and Sciences Academic Status: Undergraduate Student What conference theme areas are you interested in (check all that apply): Data Analytics and Visualization Data-Driven Modeling and Prediction Multiphysics and Multiscale Computations Numerical Linear/Multilinear Algebra Scientific Software and High-Performance Computing Interests: Machine learning methods applied to protein folding scoring.
              Shokoufeh Mirzaei   
    Institution/Organization: Cal Poly Pomona Department: Industrial and Manufacturing Department Academic Status: Faculty What conference theme areas are you interested in (check all that apply): Data-Driven Modeling and Prediction Numerical Linear/Multilinear Algebra Simulations on Emerging Architectures Surrogate and Reduced-order Modeling Verification, Validation, Uncertainty Quantification Interests: Application of machine learning in Computational Biology for identifying native-like protein […]
              Laura Nivens   
    Institution/Organization: Kansas Wesleyan University Department: Department of Computer Studies Academic Status: Undergraduate Student What conference theme areas are you interested in (check all that apply): Data Analytics and Visualization Data-Driven Modeling and Prediction Interests: data analytics data visualization machine learning Non-Work Related Activities: Kansas Wesleyan Philharmonic Choir Kansas Wesleyan String Orchestra Kansas Wesleyan Pep Band […]
              Javier Rojas   
    Institution/Organization: St. Thomas University Department: School of Science, Technology & Engineering Management Academic Status: Graduate Student What conference theme areas are you interested in (check all that apply): Data Analytics and Visualization Data-Driven Modeling and Prediction Scientific Software and High-Performance Computing Interests: Applied Mathematics Computer Science Big Data Analytics Data Mining Machine Learning  
               Collective classification for labeling of places and objects in 2D and 3D range data    
    Triebel, Rudolph and Martinez Mozos, Oscar and Burgard, Wolfram (2008) Collective classification for labeling of places and objects in 2D and 3D range data. In: Data analysis, machine learning and applications. Studies in Classification, Data Analysis, and Knowledge Organization . Springer, Germany, pp. 293-300. ISBN 9783540782391, 9783540782469
              MACHINE-LEARNING MARKET GROWTH AND TRENDS (Kusum Rautela)   
    The global Machine Learning Chip Market is expected to attain a market size of $7.9 billion by 2022, growing at a CAGR of 9% during the forecast period.
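A quick illustration of how the quoted CAGR works: the snippet below backs out the base-year market size implied by the $7.9 billion end point and 9% rate. The six-year 2016-2022 horizon is an assumption for illustration only, since the release does not state the forecast period.

```python
def cagr(start, end, years):
    """Compound annual growth rate r such that end = start * (1 + r)**years."""
    return (end / start) ** (1.0 / years) - 1.0

def implied_start(end, rate, years):
    """Back out the base-year size implied by a forecast end point and CAGR."""
    return end / (1.0 + rate) ** years

# $7.9B by 2022 at 9% CAGR; the six-year horizon (2016-2022) is assumed.
base = implied_start(7.9, 0.09, 6)  # roughly $4.7 billion
```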
              Udacity 30-Second TV Spot: "The Jobs of Tomorrow Start Here"   

    Udacity offers world-class learning opportunities in innovative fields such as artificial intelligence, virtual reality, and self-driving cars, as well as mobile and web development, data science, machine learning, and more.

    Virtually anyone on the planet with an internet connection and a commitment to self-empowerment through learning can come to Udacity, master cutting-edge skills, and secure rewarding employment.

    We work with some of the most forward-thinking companies in the world to help build curriculum, and our hiring partners help ensure our graduates get great jobs. Companies we work with include Google, Facebook, AT&T, Mercedes-Benz, IBM, Nvidia, Hack Reactor, GitHub, Amazon, and more.

    Enroll today at udacity.com, and discover why the jobs of tomorrow start here.

    Cast: Udacity


              (USA-TX-HOUSTON) Research and Development Engineer – Applied Science – Houston, TX   
    LOCATION Houston, TX 77040 EMPLOYMENT STATUS Full Time Regular ABOUT THIS JOB Baker Hughes Incorporated has an opening for a Research and Development Engineer – Applied Science at our facility in Houston, TX. As a leader in the oilfield services industry, Baker Hughes offers opportunities for qualified people who want to grow in our high performance organization. Our leading technologies and our ability to apply them safely and effectively create value for our customers and our shareholders. Baker Hughes is an Equal Employment Affirmative Action Employer. ROLE SYNOPSIS Baker Hughes is seeking a Research and Development Engineer to be part of a multidisciplinary team of scientists and engineers inventing, developing, and deploying advanced computational tools for the optimization of oil and gas production systems. The primary role may involve the development of techniques and analysis tools for the interpretation of measurement data from downhole sensors, specifically data produced by optical fiber sensing interrogators. The position will play a key role in the design, implementation, and field deployment of such analysis tools into software packages. KEY RESPONSIBILITIES/ACCOUNTABILITIES + The position will be responsible for product development projects involving a number of new technical software tools. + The successful candidate should be comfortable with coding, scripting, and debugging technical algorithms that use physics-based models, machine learning and other statistical techniques, numerical methods, and/or optimization routines. + The candidate should be comfortable providing leadership in the development of software – from the prototype stage through final release – and guiding projects through a rigorous product development process. + The position will be responsible for system feature design based on application engineering requirements. 
+ Candidate should be comfortable providing support and guidance to field personnel for field trials and the deployment of prototype products and/or services. + Candidate will participate in the creation of new technical ideas and will be responsible for documenting new ideas and filing patents and writing/submitting papers to trade journals and professional organizations as required. + Duties may also include physical experimentation. + Candidates with a background in the design and implementation of technical software are strongly encouraged to apply. ESSENTIAL QUALIFICATIONS/REQUIREMENTS + A Bachelor’s degree in computer science, physics, applied mathematics, or similar. + 3 – 5 years of experience designing and implementing scientific code and/or technical software. + Extensive programming experience in Java, C, and C# + The ability to plan and work towards milestones independently is required in this position. + The successful candidate must demonstrate technical innovation, problem solving skills, and independent initiative, and will display a general proficiency in describing complex systems to a wide audience. + Will require strong programming skills, effective verbal and written communication skills, customer focus, and an ability to function well in a highly dynamic, team oriented environment. PREFERRED QUALIFICATIONS/REQUIREMENTS + Experience with Matlab, R, Python/Jython a plus. + Experience with distributed fiber sensing technologies and systems, especially experience with such technologies used in the oil and gas industry. + Candidates with experience developing and/or using codes to numerically solve fluid & heat transport problems are strongly encouraged to apply. + Experience with reservoir simulation or modeling OTHER DETAILS + Successful candidate will be authorized to work in the US without sponsorship. 
COMPANY OVERVIEW Baker Hughes is a leading supplier of oilfield services, products, technology and systems to the worldwide oil and natural gas industry. By being the service company that best anticipates, understands and exceeds our customers' expectations, Baker Hughes Advances Reservoir Performance. The company's 39,000-plus employees work in more than 80 countries in geomarket teams that help customers find, evaluate, drill, produce, transport and process hydrocarbon resources. Baker Hughes' technology centers in the world's leading energy markets are pushing the boundaries to overcome progressively more complex challenges. Baker Hughes develops solutions designed to help manage operating expenses, maximize reserve recovery and boost overall return on investment through the entire life cycle of an oil or gas asset. Collaboration is the foundation upon which Baker Hughes builds our business and develops next-generation products and services for drilling and evaluation, completions and production and fluids and chemicals. For more information on Baker Hughes' century-long history, visit our website. _Baker Hughes is an Equal Employment Affirmative Action Employer. All qualified applicants will receive consideration for employment without regard to age, gender, gender identity, marital status, pregnancy, race, national origin, ethnic origin, color, disability status, veteran status, religion, sexual orientation or any other protection guaranteed by local law._ _If you are applying to a position in the US and you are an individual with a disability or a disabled veteran and would like any type of assistance to submit an application or to attend any recruitment or selection event, we would like to help you to ensure that your experience is as smooth as possible. 
If you need assistance, information, or answers to your questions, feel free to contact us or have any of your representatives contact us at Baker Hughes Application Assistance Toll Free at 1-866-324-4562. This method of contact has been put in place ONLY to be used by those internal and external applicants who have a disability and are requesting accommodation._ _For all other inquiries on your application, log in to your profile and click on the My Jobpage tab. General application status inquiries will not be handled by the call center._ **Job:** _Research and Development_ **Title:** _Research and Development Engineer – Applied Science – Houston, TX_ **Location:** _NORTH AMERICA-UNITED STATES-Texas-HOUSTON_ **Requisition ID:** _1706480_
              Ask About: Machine Learning and Genetics   

    World Economic Forum posted a photo:

    Ask About: Machine Learning and Genetics

    Participants during the session "Ask About: Machine Learning and Genetics" at the World Economic Forum - AMNC 17, Annual Meeting of the New Champions in Dalian, People's Republic of China 2017. Copyright by World Economic Forum / Ciaran McCrickard


              PR: The idea that machine learning means hassle is outdated; how convenient are Azure ML-based solutions?   
    Data analysis with machine learning takes a lot of preparation and is costly to operate. A new service leveraging Azure Machine Learning has arrived to change that image.
              At I/O, Android Takes Backseat to Machine Learning   
    As Google puts its machine learning at the forefront, Android is just another platform.

              RuleML+RR with DecisionCamp - July 12-14 2017, London   
    RuleML, Web Rules and Reasoning, and DecisionCamp are all co-located this year in London.
    RuleML+RR home, schedule, registration
    DecisionCamp home, schedule, registration

    Explore the latest AI happenings at RuleML+RR and keep up to date with the latest on the Decision Model and Notation (DMN) at DecisionCamp.

    When: July 12-14 2017
    Where: Birkbeck, University of London, London, UK
    Malet St, London WC1E 7HX, UK

    A number of Red Hat Engineers will be there and presenting:
    Mark Proctor - Drools co-founder, BRMS and BPMS Platform Architect:
    Edson Tirelli - Drools project lead: DMN Technology Compatibility Kit (TCK), Demystifying the Decision Model and Notation Specification
    Geoffrey De Smet - OptaPlanner founder, project lead: Real-time Constraint Solving with OptaPlanner

    DecisionCamp:
    "DecisionCAMP-2017 will include presentations from leading decision management authorities, vendors, and practitioners. The event will explore the current state in Decision Management, the real-world use of the DMN standard, and solutions to various business problems using Decision Management tools and capabilities. The event will include a special Open Discussion “What you Like and What you Do Not Like about DMN” and a QnA Panel “Real-world Business Decision Management: Vendor and Practitioner Perspectives”."

    RuleML+RR:
    "RuleML+RR 2017 is the leading international joint conference in the field of rule-based reasoning, and focuses on theoretical advances, novel technologies, as well as innovative applications concerning knowledge representation and reasoning with rules."

    Key Note Speeches:
    Bob Kowalski (Imperial College London): Logic and AI – The Last 50 Years
    Stephen Muggleton (Imperial College London): Meta-Interpretive Learning: Achievements and Challenges
    Jordi Cabot (IN3-UOC, Barcelona): The Secret Life of Rules in Software Engineering (sponsored by EurAI)
    Jean-Francois Puget (IBM): Machine Learning and Decision Optimization
    Elena Baralis (Politecnico di Torino): Opening the Black Box: Deriving Rules from Data



              Google’s AlphaGo Defeats World’s Top Human Go Player   
    Computers mastered chess decades ago, but Google’s DeepMind has its sights set on Go, a more complex game that pushes machine learning to its limits. After defeating several of the top players in the world, the “AlphaGo” AI has now bested Ke Jie, the current Go world champion. It was a narrow victory, and there […]
              Microsoft to sell Box storage to Azure customers   
    Microsoft has announced a new tie-up with Box that will extend the intelligence and reach of its Azure cloud platform. Under the terms of the deal, Box will now use Azure as a strategic cloud platform, with a new "Box on Azure" offering now available to enterprise customers around the world. However, the partnership will also see Box getting the chance to use Azure's artificial intelligence and machine learning capabilities for the first time. This could soon mean that Box customers would be able to use highly advanced tools, such as advanced content processing and voice control, to power… [Continue Reading]
              Senior Digital Analyst - Best Buy - Richfield, MN   
    Real world experience in machine learning. Use full ecosystem of data sources, internal and external, to "connect the dots" as it relates to all consumer touch...
    From Best Buy - Wed, 24 May 2017 19:40:58 GMT - View all Richfield, MN jobs
              Senior Linguist - Approgence Inc. - Mountain View, CA   
    MS Office, Adobe Suite, graphics packages). Support Machine Learning experts in language specific areas for our target language (US English)....
    From Approgence Inc. - Fri, 19 May 2017 18:19:23 GMT - View all Mountain View, CA jobs
              Senior UX Designer - Daimler - Sunnyvale, CA   
    MBRDNA is headquartered in Silicon Valley, California, with key areas of Advanced Interaction Design, Digital User Experience, Machine Learning, Autonomous...
    From Daimler - Thu, 01 Jun 2017 23:01:07 GMT - View all Sunnyvale, CA jobs
              Senior Machine Learning Engineer, Security - Adobe - San Jose, CA   
    The Enterprise Security Team at Adobe is reinventing how users and devices get identified and connect to the internal resources across Adobe using machine...
    From Adobe - Wed, 26 Apr 2017 01:48:07 GMT - View all San Jose, CA jobs
              Sr. Data Science Engineer - Adobe - San Jose, CA   
    Develop predictive models on large-scale datasets to address various business problems through leveraging advanced statistical modeling, machine learning, or...
    From Adobe - Fri, 26 May 2017 06:25:59 GMT - View all San Jose, CA jobs
              Senior Program Manager - Adobe - San Jose, CA   
    Adobe is an equal opportunity employer. Become part of this growing team at Adobe and make a big impact by providing search, browse, machine learning and...
    From Adobe - Thu, 20 Apr 2017 19:29:50 GMT - View all San Jose, CA jobs
              Cross-Fitting Double Machine Learning estimator   
    By Gabriel Vasconcelos Motivation In a recent post I talked about inference after model selection, showing that a simple double selection procedure is enough to solve the problem. In this post I’m going to talk about a generalization of the … Continue reading →
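The cross-fitting idea behind the estimator can be sketched in a few lines of numpy. This is a minimal illustration, not the post's code: plain least squares stands in for the machine-learning nuisance learners, and the function name and synthetic data are made up for the example.

```python
import numpy as np

def cross_fit_dml(y, d, X, n_folds=5, seed=0):
    """Cross-fitting double ML for the partially linear model
    y = theta*d + g(X) + eps,  d = m(X) + v.
    OLS is used here as a stand-in for the nuisance learners."""
    n = len(y)
    folds = np.array_split(np.random.default_rng(seed).permutation(n), n_folds)
    y_res, d_res = np.empty(n), np.empty(n)
    for k in range(n_folds):
        test = folds[k]
        train = np.concatenate([f for j, f in enumerate(folds) if j != k])
        Xtr = np.column_stack([np.ones(len(train)), X[train]])
        Xte = np.column_stack([np.ones(len(test)), X[test]])
        # nuisance regressions are fit on the other folds only ...
        by, *_ = np.linalg.lstsq(Xtr, y[train], rcond=None)
        bd, *_ = np.linalg.lstsq(Xtr, d[train], rcond=None)
        # ... and residuals are taken out-of-fold (the cross-fitting step)
        y_res[test] = y[test] - Xte @ by
        d_res[test] = d[test] - Xte @ bd
    # orthogonal moment: regress the y-residuals on the d-residuals
    return float(d_res @ y_res / (d_res @ d_res))

# synthetic check with true treatment effect theta = 2
rng = np.random.default_rng(1)
n, p = 2000, 5
X = rng.normal(size=(n, p))
d = X @ rng.normal(size=p) + rng.normal(size=n)
y = 2.0 * d + X @ rng.normal(size=p) + rng.normal(size=n)
theta_hat = cross_fit_dml(y, d, X)
```

With honest (out-of-fold) residuals, the estimate should land close to the true value of 2 despite the nuisance models being fit on the same dataset.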
              Software Engineer - Cloud / DevOps - Manchester / People Source / Manchester, Lancashire, United Kingdom   
    People Source/Manchester, Lancashire, United Kingdom

    Software Engineer with Cloud / Automation / DevOps experience required for a global big data company in Manchester.

    This is a unique opportunity to transform the delivery of cutting edge customer science and machine learning research, and leverage some unique access to big data.

    The role will involve creating systems that scale well with predictable performance. You will implement infrastructure design as code using a mix of Python, Ansible, and Unix scripting plus other tools when needed. This will lead to the creation of continuous delivery pipelines which establish reliable and secure deployment.

    You might come from an infrastructure or software development background, and you must bring a keen interest in the systems, architecture and software needed to deliver complex analytics and applications using open source and cloud platforms.

    I'm looking for people with good experience in the design, development, and testing of infrastructure as code / DevOps, including:

    *DevOps tooling such as Chef, Puppet or Ansible

    *The tools used to automate public cloud systems (Google Cloud, AWS, Azure)

    *Unix and/or Windows shell scripting and administration

    *Management of data platforms, e.g. Hadoop, SQL Server, Postgres etc.

    *Networks

    You must be able to demonstrate:

    *Experience administering and managing relational and non-relational databases.

    *Experience with software development languages and different language types, such as Python, Java, Scala, .NET, C, SQL, C++, golang.

    *Strong Agile experience: Iterative Development, Refactoring, Unit Testing, CI

    You'll be part of a large project that is moving from their old proprietary systems to open source, creating the next generation data platform.

    You will receive 25 days leave, a good bonus scheme and a flexible benefits package. On the last Friday of the month the whole company finishes at 2pm, and at 4pm on all other Fridays.

    Other benefits include:

    Subsidised masseuse and osteopath (Visits office weekly), Travel and Parking Season Ticket Loan, Fruity Wednesdays, 25 days leave, your birthday off, free use of company gym and critical illness cover.

    It's based in central Manchester with easy commuting links.

    If you have the relevant experience please apply now.

    People Source Consulting Ltd is acting as an Employment Agency in relation to this vacancy.

    People Source specialise in technology recruitment across niche markets including Information Technology, Digital TV, Digital Marketing, Project and Programme Management, SAP, Digital and Consumer Electronics, Air Traffic Management, Management Consultancy, Business Intelligence, Manufacturing, Telecoms, Public Sector, Healthcare, Finance and Oil & Gas.

    Employment Type: Permanent

    Pay: 48,000 to 60,000 GBP (British Pound)
    Pay Period: Annual
    Other Pay Info: £48000 - £60000 per annum + bonus + benefits

    Apply To Job
              GPU computing key to machine learning and big data performance   
    While the CPU remains central to data processing, massive gains in AI analytics and big data performance are being seen when GPU computing is thrown into the mix.
              Big data recognition technology the next frontier for machine learning   
    It's one thing to have big data, but it's another to be able to understand it. That's why big data recognition technology is so important to the world of machine learning.
              Futures: Deep learning and health - the hurdles machine learning must leap   
    Startups and Silicon Valley giants are pushing into medicine with artificial intelligence and deep learning.
              Data Scientist (Machine Learning)   

              (Associate) Data Scientist for Deep Learning Center of Excellence - SAP - Sankt Leon-Rot   
    Build Machine Learning models to solve real problems working with real data. Software-Design and Development....
    From SAP - Fri, 23 Jun 2017 08:50:58 GMT - View all Sankt Leon-Rot jobs
              The Evolution of Employment in the AI Era – Intel Chip Chat – Episode 518   
    In this Intel Chip Chat audio podcast with Allyson Klein: Reuven Cohen, technology executive, entrepreneur, and mentor, joins us live from Intel AI Day in San Francisco. Cohen is a pioneer of the infrastructure-as-a-service space with more than 15 years of experience in cloud computing. In this interview, Cohen discusses how machine learning and intelligent [...]
              Solutions Architect - NVIDIA - California   
    Be an internal champion for Data Analytics and Machine Learning among the NVIDIA technical community. Do you visualize your future at NVIDIA?...
    From NVIDIA - Fri, 14 Apr 2017 11:01:35 GMT - View all California jobs
              Software Engineer / Sr Software Engineer - Applications - LG Electronics - Santa Clara, CA   
    Exposure to AR / VR / Machine Learning (preferred). Ability to quickly prototype and debug code around variety of hardware (RPi, NVidia, etc)....
    From LG Electronics - Fri, 09 Jun 2017 04:34:18 GMT - View all Santa Clara, CA jobs
              Senior Cloud Security Architect - NVIDIA - Santa Clara, CA   
    Do you visualize your future at NVIDIA? Machine Learning, Deep-Learning, Artificial Intelligence – particularly in Regression or Forecasting,....
    From NVIDIA - Sat, 20 May 2017 16:05:22 GMT - View all Santa Clara, CA jobs
              Director Federal - NVIDIA - Washington, DC   
    Machine learning, data analytics, and artificial intelligence experience preferred. Grow revenue and market share for NVIDIA DGX-1, Tesla and GRID products....
    From NVIDIA - Thu, 08 Jun 2017 10:22:02 GMT - View all Washington, DC jobs
          #eliax Predictions and Trends for 2017 (UPDATED)   
    Source: eliax.com

    eliax, for curious minds... Hello readers,

    As is now tradition, and for the 12th (twelfth!) consecutive year, it is time for my traditional list of predictions and trends for the new year, in this case 2017.

    And as I have done in recent years (to fend off the trolls), be warned that this is a list of both predictions and trends, which means you will see some items on it that are fairly obvious. Take it more as a fun intellectual reference map for debating and thinking about what we will see in this new year ;)

    And one more very important thing: unlike the trends, I generally do not repeat predictions each year, yet many of them tend to come true 2 or even 3, 4 or 5 years later, so I recommend browsing the previous predictions (links to all of them at the end of this article), because if past years are any guide, some of them will come true this year.

    So without further ado, let's begin...


    1. The Apple VR
    The worst-kept secret today is that Apple is working on some kind of Virtual/Augmented Reality platform, and given the great advances its competitors are making, I believe 2017 is the year Apple will reveal it, quite possibly tied to its new 2017 iPhone and announced around September.

    As for the name, there are several options: Apple VR, iVR, iPhone VR, VR Pod, Apple Reality, iReality, iGlass, etc. (some of these names have been used before, but Apple can negotiate for them, as it did with CISCO for the iOS name).

    Probability: 70%


    2. The Apple iHome
    I don't know what this will end up being called, but in essence Apple will sooner or later have to respond with a competitor to the hugely popular Amazon Echo, which for now is marketed only in the US but already has 6 million units installed in homes there, which speaks to the potential of this technology.

    For those who don't know what an Amazon Echo is, it is essentially a voice interface for turning your home into a smart home, in the form of a cylindrical speaker that connects to the Internet and responds through the artificial-intelligence assistant Alexa, answering your questions, playing music, controlling your home's lights, and so on.

    Google already responded with Google Home a few weeks ago, and now it's Apple's turn...

    Probability: 80%


    3. The New iPhone (8)
    It is more likely that Apple releases a new iPhone every year than that the Sun rises tomorrow, so we can all agree on that much. Rumors say it will have a screen covering almost its entire face, with the biometric fingerprint sensor hidden behind the display, possibly speakers embedded in the display as well, and it will be the first iPhone to charge wirelessly, with no wired electrical connectors. It would also come with better dual cameras, and even some Augmented Reality capability.

    It is said that one of the new iPhones will be 5" in size, which could mean either (1) an iPhone sized between the iPhone 7 and the iPhone 7 Plus, or (2) that we are really talking about the same iPhone 8/Plus which, being "all screen," can shrink its external size without shrinking the display itself.

    Beyond that, there are rumors that we will see not one, not two, but three new iPhone models, and I have my own theories about that...

    One theory, the most popular one, is that we will see an iPhone 7S, an iPhone 7S Plus, and an iPhone 8 to commemorate the iPhone's tenth anniversary. Or perhaps they will end up calling the third model the iPhone 7 Special Edition, or iPhone 7 10th Anniversary Edition?

    However, I wonder whether it isn't time to drop the numbers and go back to simply "iPhone," marketed as "The new iPhone."

    Another idea that occurred to me, which I shared with you a few months ago on the #eliax social networks, is to adopt the fashion industry's naming and call the new iPhones simply iPhone S, iPhone M, and iPhone L, for Small, Medium and Large, which I think would fit nicely with the minimalist, fashion-minded image Apple gives its products.

    Probability: 100% (of a new iPhone 8)
    85% that it has the features described here
    50% that they adopt this naming scheme


    4. Apple product updates
    Expect a new external wireless keyboard with the Touch Bar that debuted on the new MacBook Pro, as well as Touch ID to read your fingerprint.

    Also expect a new iMac with a new Touch Bar keyboard, though that one could be wired. This iMac might finally get a touch screen like Microsoft's Surface line, but don't hold your breath waiting for it...

    As for the MacBook Air, it no longer has a reason to exist now that the new MacBook is out, so Apple may retire that product.

    It is possible that some of the new Macs will come with a new architecture (as I detail a few predictions below).

    Probability: 75%


    5. Goodbye to the iPod?
    The iPod was the device that launched Apple into a new era in the consumer space, opening the door to the iPhone, iPad and Apple Watch.

    Today, however, it is irrelevant in a world where we all carry our music on our phones. So unless Apple does something radical (such as multi-channel iTunes HD or something similar to serve a niche of audio fanatics), I think this year could be its end. I would like to see a special edition for audiophiles (is that the word?).

    Probability: 55%


    6. A Surface Phone?
    Since the first iPhone came out, Microsoft has been in decline in the mobile world, first with its Windows Mobile platform (too complicated, desperately trying to cram desktop Windows into the palm of your hand), and later with Windows Phone (which arrived too late, when iPhone and Android were already established).

    But Microsoft's Surface division has shown it has a design and engineering team that rivals even Apple's, so the company may try once more with a Surface Phone, this time based on Windows 10.

    Its chances of success are low, however, given how content users are with the iPhone-Android duopoly.

    Probability: 55%


    7. Social networks rising and falling
    Let's condense several social-network predictions into one:

    7.1: Snapchat, Instagram, WhatsApp and Facebook will keep growing at a dizzying pace.

    7.2: Snapchat's IPO (its early public offering) will be sensational, but we will see whether it can withstand Facebook's blows in the long run (most likely not, in which case it gets acquired, perhaps by Microsoft or Samsung, unless it reinvents itself with Augmented Reality or expands into other areas).

    7.3: Twitter will keep declining, and may be acquired by another giant (Microsoft or Samsung).

    7.4: Google+ will remain mostly irrelevant, and (unless Google does something drastic and effective) will even decline as users pay more attention to networks that are more effective at capturing their attention.

    Probability: 70%


    8. Fitbit to be acquired
    Now that Fitbit (a company specializing in smart bands for sports and health) has acquired Pebble, it will likely become appetizing enough for a bigger fish to acquire.

    Probability: 65%


    9. IoT security, a headache
    The IoT (Internet of Things) is an industry that ranges from Internet-connected surveillance cameras to Internet video recorders, Internet-connected toys, connected TVs, connected industrial sensors, and anything else you can imagine connected to the Internet.

    2016, however, exposed the huge security problem now at hand: hundreds (and soon, thousands) of millions of these devices have inadequate security, making them easy targets for hackers to break into and take over, creating networks of millions of "zombie" devices that are then used, rented or sold to the highest bidder (on the black markets of the Deep Web or Dark Web) to coordinate DDoS attacks, spread viruses and trojans, invade other machines, and so on.

    In 2016 there were at least 3 large-scale attacks, one of which slowed down the entire global Internet for several hours, while others delivered floods exceeding 600 Gbps (600 gigabits per second) of data.

    And worst of all, in 2017 we will see this escalate exponentially into even worse attacks...

    Probability: 90%
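    To put that 600 Gbps figure in perspective, a back-of-the-envelope calculation shows why compromised IoT devices are such attractive ammunition. The per-device upload rate below is my own assumption for illustration, not a figure from the article:

```python
# Rough botnet sizing for a 600 Gbps DDoS flood.
# Assumption (not from the article): each compromised IoT device
# can sustain about 1 Mbps of upstream traffic.

attack_gbps = 600
per_device_mbps = 1  # assumed upstream rate per hijacked camera/DVR

# Convert Gbps to Mbps, then divide by the per-device rate.
devices_needed = attack_gbps * 1000 // per_device_mbps
print(f"~{devices_needed:,} devices at {per_device_mbps} Mbps each "
      f"sustain {attack_gbps} Gbps")
# → ~600,000 devices
```

    Even at a modest 1 Mbps per hijacked camera or DVR, a few hundred thousand devices suffice, and the IoT botnets behind the 2016 attacks were reported in exactly that size range.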


    10. IoT legislation
    Because of the previous point, we will see the first serious discussions (informal ones have existed for years) at the legislative level about regulating the construction of IoT devices, guaranteeing at least a minimum degree of security before they can be put on the Internet. The US FCC may eventually be the one to lead this.

    Probability: 65%


    11. Real-time takes off on social networks
    In recent months (and particularly in the last weeks of 2016), the big players in the social-network space shipped updates that let people broadcast live video on their networks.

    This is already possible today on Facebook, Instagram and Twitter, to name a few. It will intensify and become a basic expected feature of any serious social network in 2017.

    Probability: 90%


    12. Customer service by video
    Given the proliferation of live-video tools on social networks (as explained in the previous point), and of communication tools that make video calls trivial (FaceTime, WhatsApp, Messenger, Google Duo, etc.), it will become much easier and more practical (not to mention cheaper) for companies to start offering technical support and customer service by video.

    Those who start early will treat it as a competitive advantage and as a new sales and service channel.

    Probability: 80%


    13. Artificial Intelligence (AI) to grow rapidly, starting with virtual assistants
    The war between Apple Siri, Google Now, Microsoft Cortana, Samsung Viv, Amazon Alexa and others (such as Hound) will intensify in 2017, and we will see other competitors (who will probably be acquired quickly). AI will be one of the fields attracting the most investment in 2017, along with (see the next point)...

    Probability: 100%


    14. Virtual/Augmented/Mixed Reality to heat up to the extreme in 2017
    Last year I told you that 2016 would be the year VR and AR took off, and 2017 will be the year these technologies become red hot in investment terms.

    Probability: 95%


    15. Augmented Reality will not yet overtake Virtual Reality
    Although everyone agrees that AR will eventually be more important than VR, I doubt that will be the case in 2017, given the complexity and cost of implementing AR. Augmented Reality needs far more processing power, much more complex algorithms, more advanced (and expensive) circuitry and sensors, and highly specialized software, requiring quite expert labor. For now, VR solutions are simpler to implement and already have a mass market.

    Probability: 80%


    16. Big advances in Augmented Reality
    Although AR will trail VR in 2017, that does not mean it will stagnate; on the contrary, we will see plenty of competition both in AR glasses and in their basic components (holographic mirrors, dedicated integrated circuits, reality-blending algorithms, etc.). AR will keep growing rapidly alongside VR in the industrial sector and in specialized fields such as medicine.

    Probability: 75%


    17. Phones start moving toward a wireless future
    The iPhone 7 left behind the traditional 3.5mm analog audio connector, and the next iPhone is rumored to charge wirelessly. Although on their own these are not great novelties (some Android phones have charged wirelessly for years), Apple has demonstrated the advantage of its powerful W1 audio technology (with its new wireless AirPods), how easy it is to send video over AirPlay, how simple it is to share data with nearby devices via AirDrop, and its wireless-power implementation is rumored to be the best and most practical on the market, allowing devices to charge from relatively long distances.

    All of this puts pressure on the Android world, spurring competition, and the effect will be a rapid transition toward everything wireless in phones, tablets, and even laptops. We are approaching the point where devices like phones will ship completely sealed, without a single connector. The whole transition will take about 3 years.

    Probability: 85%


    18. The Samsung Galaxy S8 and Note 8, powerhouses
    After the explosive disaster that was the Note 7, the company knows its next devices will have to be enormously powerful and functional to win back those who abandoned its loyal camp, so expect Samsung's biggest leap yet in phones and phablets with its next models.

    Expect curved screens on all models, 4K resolution on the Note 8, fingerprint readers behind the display like the next iPhone, stereo speakers like the iPhone 7, a digital audio connector like the iPhone 7, dual cameras like the iPhone 7 Plus, and other improvements.

    Probability: 80%


    19. Google AI, the bar the others will be measured against
    Among the virtual assistants, Google Assistant will be the high bar against which the rest are measured, although (read the next point)...

    Probability: 70%


    20. Amazon Alexa, the most practical home digital assistant
    Though perhaps not the most powerful digital assistant, Alexa will be the most practical of all for everyday household things, from requesting songs and the weather forecast to playing radio, looking up basic facts and automating our homes. Perhaps Amazon will finally dare to release an international version of its Amazon Echo in other languages...

    Probability: 75%


    21. The Nintendo Switch will not be as popular as the Nintendo Wii
    I do not believe the Nintendo Switch, which goes on sale in the first quarter of 2017, will be the remedy for the disaster that was the Wii U, although it may well outsell it; but I strongly doubt it will come close to the sales success of the original Wii, because PlayStation VR will be a better proposition in 2017 for players who want something truly different.

    Mind you, at launch the Switch will be a sales success, and every Mario and Zelda fan will want one, but this time the money will flow more toward also trying the far more novel PSVR.

    Probability: 70%


    22. The Smart Home finally begins to take off
    Now that Amazon, Google and Apple are in the fight, we will at last see the takeoff of the famous Smart Homes they have been promising us for years...

    Probability: 75%


    23. Machine Learning joins the core software-engineering curriculum
    By the end of 2017, most universities (those that consider themselves forward-looking) will include Machine Learning in their standard software-engineering curricula. ML will be as fundamental as AI (Artificial Intelligence) and VR in this second half of the current decade.

    Probability: 60%


    24. The health industry and Machine Learning (plus AI)
    In 2017 it will become evident (for those still blind to it) that the healthcare market will be enormously and profoundly affected by Machine Learning and Artificial Intelligence.

    Probability: 85%


    25. Big Data hand in hand with Machine Learning and AI
    This year it will also become evident that Big Data goes with Machine Learning and AI the way bread goes with butter (or Dominican mangú with salami and fried eggs)...

    It is simply impossible for humans to analyze so much data with plain statistical procedures or Excel spreadsheets; we need pattern recognizers that go far beyond linear equations over a few low-dimensional variables. We have reached the point where we will have to trust machines to interpret and discover patterns in our massive data, and many will be uncomfortable with the loss of human control that implies.

    Probability: 95%
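    A toy illustration of that claim (my own example, not from the article): a single linear rule cannot separate even the classic XOR pattern, while a trivial nonlinear pattern recognizer handles it without effort:

```python
# XOR: label is 1 when exactly one input is 1. No single linear
# threshold on the raw inputs can classify all four points correctly,
# but a trivial nonlinear recognizer (1-nearest-neighbor) can.

data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def linear_rule(x, w=(1.0, 1.0), b=-0.5):
    # One linear threshold: predict 1 if w·x + b > 0.
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

def nearest_neighbor(x, train):
    # Predict the label of the closest training point (toy 1-NN).
    return min(train, key=lambda p: (p[0][0] - x[0]) ** 2 +
                                    (p[0][1] - x[1]) ** 2)[1]

linear_acc = sum(linear_rule(x) == y for x, y in data) / len(data)
nn_acc = sum(nearest_neighbor(x, data) == y for x, y in data) / len(data)
print(linear_acc, nn_acc)  # → 0.75 1.0
```

    The linear rule inevitably misclassifies one of the four points no matter which weights you choose; a recognizer that can carve nonlinear boundaries does not, and real datasets have thousands of dimensions instead of two.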


    26. Mass migration to the Cloud continues
    Continuing a trend of recent years, 2017 will bring no pause; on the contrary, we will see accelerated adoption of enterprise systems hosted in the Cloud (Internet platforms off your premises).

    This includes every kind of IT system, from hosted email and sales-force coordination (as with Gmail and Salesforce) to mass migrations of ERP-type systems to the cloud (such as the cloud versions of Microsoft Dynamics) or cloud infrastructure (such as Amazon AWS services). There is no turning back.

    Probability: 100%


    27. Most new IT systems will be Cloud-First/Mobile-First
    Unless we are talking about novices, or developers who want to cling to the past, by 2017 the vast majority of new IT systems will be conceived with a Cloud-First/Mobile-First philosophy (CFMF, a term I just made up), meaning they will be designed from scratch to be consumed from the cloud and/or mobile platforms. Desktop applications will now be a niche for very specific applications that require local processing (such as Photoshop or AutoCAD), although even those will increasingly require cloud and mobile components to stay efficient in the long run.

    Probability: 90%


    28. Growth of virtual theme parks
    This past year, 2016, a handful of companies began experimenting with Virtual Reality in theme parks, and everything tells me that within 3 years the great majority of them will adopt VR in some form, including the traditional parks.

    Probability: 85%


    29. Slack to be acquired
    Slack, the tool for team collaboration in the cloud, is simply too appetizing, and it is quite likely to be an acquisition target in 2017.

    Probability: 65%


    30. JavaScript keeps solidifying, particularly in the cloud and on mobile
    Thanks to Node.js and Amazon Lambda, and to mobile frameworks like PhoneGap/Cordova and the Ionic Framework, JavaScript will keep solidifying as the most important programming language to learn alongside C, Java, Swift and, to some extent, Python. JavaScript is even gradually becoming a replacement for all those other languages, with the possible exception of C/C++ (which will keep reigning in the system-level software niche of drivers, operating systems, etc., until Swift is more widely adopted for those purposes).

    Probability: 95%


    31. Cloud IDEs take off with Facebook's version of Atom
    For several years I have been predicting the eventual importance of cloud-hosted IDEs (integrated software development environments), and I think 2017 will finally bring the first semi-notable takeoff of this technology (which in my opinion is how software will be developed in the future), thanks to Facebook's adaptation of the Atom web IDE (which it calls "Atom in Orbit"), allowing it to run 100% hosted in the cloud through web browsers, with access to local resources.

    Probability: 60%


    32. Social virtual worlds in VR, finally a reality
    Until now, when we talked about interacting in "virtual worlds" we really meant being part of some experience on the screen of a PC, laptop, tablet or even phone, using an avatar (a digital character that represents us in a virtual world), rendered in 3D like most console games on the PlayStation or Xbox One. In 2017 that will begin to change...

    Unlike games or worlds we watch on our screens (like the classic, ahead-of-its-time Second Life), thanks to low-cost Virtual Reality goggles (and eventually to more practical and affordable Augmented/Mixed Reality technologies), we will finally be able to be literally *inside* the virtual world, which could be anything from an improved version of Second Life to a new virtual social network (such as vTime or Facebook VR), or from an interactive cinematic experience to a new generation of video games.

    Probability: 70%


    33. VR addiction starts to show with the PSVR
    One of the biggest concerns among Virtual Reality experts is the potential addictive effects of VR on the population.

    Until now this was never a problem, given how niche VR was and how relatively limited the available content was, but now, with the release of PlayStation VR (PSVR), I believe 2017 will be the year we see many parents complaining that their children are "lost" inside the PSVR for hours on end.

    Probability: 70%


    34. The need arises to rate virtual experiences
    Unlike a horror movie, where you watch everything on a screen and know you are merely watching something, and where to escape a scare you only have to look away, Virtual Reality produces a completely different phenomenon: you literally feel "inside" the experience. That means (to continue the example of a horror movie or experience) that any psychological event is amplified exponentially, which can provoke extremely strong emotions; worst of all, if you look away you are still inside the virtual world, creating a rather intense sensation that plays with your senses and instincts.

    Sooner or later this will lead to cases of people suffering serious psychological harm (or even physical harm, in the case of heart attacks), which in turn will force legislation so that virtual experiences are classified much as movies and TV shows are today, with the addition that the risks of undergoing each experience will have to be clearly spelled out.

    And note that I have tried several terrifying VR experiences myself, and I can assure you that, unlike some movies, these would have quite powerful effects on minds that are young or otherwise unsuited to this kind of experience.

    Probability: 70%


    35. 4K UHDTV sets outsell 1080p HD at Christmas
    At least in the US, 4K UHDTV sets will outsell Full-HD 1080p TVs this Christmas 2017, and in the rest of the world they will represent a significant share of total sales (between 20% and 50% of units sold).

    Probability: 70%


    36. Magic Leap finally ships, but disappoints
    Magic Leap has spent a couple of years in stealth mode and amassed around US$1,000 million from top-tier investors; however, when it finally ships this year, it may well disappoint, failing to live up to the expectations set by its promotional videos and not proving markedly superior to Microsoft's HoloLens.

    Probability: 65%


    37. The robotic kitchen
    Foretold for decades in popular culture through TV shows like The Jetsons, and thanks to recent experiments, the robotic kitchen will begin to take off in 2017, though mind you, not so much in homes yet, but rather in restaurants and at industrial scale.

    Probability: 50%


    38. Fake news grows into a real headache
    Here is a personal statistic: I would say that roughly 70% of all the news my friends send me through social networks and messaging is fake. And this problem is moving from being a curiosity (or annoyance) to a real problem with serious consequences for the whole population.

    Indeed, it is estimated that a good part of the recent US election battle was fought on Facebook, with hundreds of fake stories favoring one candidate or another.

    And this problem is only beginning and will intensify in 2017, particularly because those who receive these stories rarely bother to check the sources, or even to apply common sense, preferring instead to believe what they want and to keep spreading this kind of news until it goes viral, which in turn has become a big business for spammers.

    Probability: 90%


    39. Trump vs. the science-and-technology industry
    If Donald Trump really intends to keep his word on these matters, the science-and-technology industry will fight a hard battle against the US President-elect in 2017, and in the end (whether this year or later), it is the technologists who will win that battle.

    Probability: 65%


    40. Trump creates instability with China
    We cannot talk about Trump without mentioning China, which for the purposes of these predictions and trends we will treat as one of today's titan nations of technology; and if we go by Trump's words, economic instability may arise as both nations prepare to show who has the bigger cojones at the negotiating table.

    Probability: 55%


    41. Automation will create new kinds of jobs
    97% of analysts say that as we keep automating everything, more human jobs will be lost and we are approaching a serious labor crisis. I am among the minority who believe that new kinds of jobs will be created, that any decline in the workforce will be only temporary (until a new generation can be rapidly trained for new kinds of jobs, thanks to technologies like VR), and that in the end everything will be fine. Call me an optimist.

    Probability: 90%


    42. Growth of live events broadcast in Virtual Reality
    Another trend we will see in 2017 is the growth of live-event broadcasting via VR, from sports events to conferences and concerts.

    Probability: 98%


    43. Uber unstoppable
    Uber will continue its global expansion, and taxi drivers will keep complaining (nothing new here). We will also see several taxi companies releasing Uber-like apps, and others even adopting an Uber-like business model. For most of them, however, it will be too late.

    Probability: 80%


    44. Amazon AWS becoming "the Windows of Cloud Apps"
    Just as Windows once reigned in the era of desktop apps, Amazon Web Services (Amazon AWS) will start to be seen as the Windows of cloud apps.

    Probability: 85%


    45. Microsoft Azure to be Amazon AWS's main competitor
    The Microsoft Azure platform (and not Google's cloud services) will be seen as Amazon AWS's main competitor, though it will remain in second place given Amazon's great head start in years, experience, customer trust, and investment in infrastructure. Even so, Microsoft Azure will see stellar growth in 2017.

    Probability: 80%


    46. Apple iMessage on Android
    I do not know whether Apple plans to do this, so this is more a wish than a prediction, but if Apple wants iMessage to grow and to stand a chance against WhatsApp, sooner or later it will have to release an Android version.

    Probability: 65%


    47. Facebook ads keep hurting Google's ads
    For years I have considered Facebook's platform more effective than Google's in many respects when it comes to ad placement, but by 2017 it will become evident that Facebook's growth in this space is such that, for the first time, it will start doing serious damage to Google AdSense/AdWords revenue, which could eventually affect the company's share price.

    Probability: 80%


    48. The Alexa Phone
    Remember the Amazon phone that I predicted, the moment I saw it, would be a disaster because of its price and functionality, and that was canceled barely weeks later due to its extremely poor market reception? Well, I think the moment is right for Amazon to use its Alexa brand for a new attempt with a simpler, lower-cost phone (under US$199), with the digital assistant Alexa as its star differentiator. But although I think the timing is right, it is also possible that Amazon will prefer to concentrate its efforts on Alexa products other than phones.

    Probability: 50%


    49. Apple replaces Intel x86 with ARM in some of its Macs
    Apple is a company that likes to control its entire ecosystem, and one dependency it currently has is the processors in its Mac line, which rely on its supplier Intel and the x86 architecture. But as I have predicted for years here on eliax, at some point Apple would make its "A"-series processors so powerful that they could replace Intel's best, particularly by tying them intimately to macOS, and I believe the right moment is precisely this 2017, when Apple could transition either its whole line, or at least its MacBook line, to these ARM-architecture processors.

    It remains to be seen whether Apple would rely on third parties simply recompiling their apps to run on macOS ARM, or whether it will provide some kind of x86 emulation for "classic" applications.

    One advantage Apple has is that it could get a single chip with 8, or even 16 or 32 cores (for "Pro" models), at a price similar to or lower than a 2- or 4-core Intel processor, while also including on these chips specialized circuits that accelerate specific operating-system functions, making them in practice as fast as or faster than Intel's chips.

    Odds: 60%


    50. More than 5 million electric vehicles
    If the Tesla Model 3 reaches the market on time, it is possible that in 2017 we will approach the magic figure of 5 million cars sold worldwide, and less than 3 years later, 10 million.

    Odds: 70%


    51. Artificial-intelligence chatbots in call centers
    As chatbots (chat programs that chat or talk with you as if they were human) grow in intelligence, efficiency, and specialization, they will start displacing human jobs in call centers, cutting costs and offering faster, more efficient service (not because a chatbot is faster or more efficient, but because it is possible to launch hundreds or thousands of them at once, so customers are never left waiting on phone lines or chat screens). And as a sub-prediction: for now, we will see many humans complaining that they would rather talk to a human being...

    Odds: 65%


    52. WhatsApp worsens the headache for traditional telecom companies
    As predicted in several previous articles here on eliax, and in my talks, we are quickly reaching a point where traditional telecommunications companies (offering landlines, cellular service, internet, and cable TV) will begin to lose their biggest revenue sources and will start transforming into companies that simply move data across the network.

    This is because modern messaging apps are already making traditional long-distance calls nearly unnecessary, just as they long ago made the SMS/MMS text-message market obsolete. And on top of that, services like Netflix, Amazon Video, HBO Go, Hulu, and iTunes are gradually ending dependence on cable TV.

    But the hardest blow may be the growing adoption of voice and video calls over WhatsApp, which little by little is becoming the de facto standard for global cross-platform communication.

    And by the way, I would love to see the day when WhatsApp no longer depends on phone numbers to identify its users, and instead opts for handles similar to those used on Twitter.

    Odds: 95%


    53. Drones still won't take off in the rapid-delivery market
    Although companies like Domino's and Amazon have already demonstrated delivery of food and electronics by drone, many obstacles remain to be solved (from battery issues to drone safety), and I don't see the technology as viable yet for practical consumer launches, particularly at scale, at least not in 2017.

    Odds: 85%


    54. Drones for emergency services
    However (following on the previous point), one area where drones could already be used in practical settings is emergency services: from identification and reconnaissance work in disaster zones to delivery of urgent medicines. In 2017 we will see more advances and trials of this in several parts of the world.

    Odds: 70%


    55. Genetic experiments with humans begin
    In the last 3 years, tools have been developed that for the first time make it practical to experiment with the human genome relatively efficiently, and in 2017 we will hear reports of experiments on humans using these tools, particularly in the health sector.

    Odds: 70%


    56. Solar power keeps getting more efficient
    In what is a trend I have been noting for years, 2017 will be no exception: we will see solar panels whose efficiency per dollar is possibly 30 to 50% better than models from early 2016.

    Odds: 80%


    57. eSports grows
    eSports, or digital/electronic sports (such as video-game tournaments), will continue their rapid worldwide growth in 2017.

    Odds: 90%


    58. The ransomware problem will get worse in 2017
    Ransomware, the practice of kidnapping your digital data (released only if you pay a sum to whoever hijacked the data on your PC or phone), will increase enormously in 2017 and will spread beyond traditional devices, even to smart TVs (this past December saw the first case of ransomware taking over a person's Smart TV and refusing to let them watch TV unless a sum was paid to the creators of the trojan that had installed itself on the set).

    Odds: 90%


    59. Big investment and new companies in autonomous cars
    The autonomous-vehicle industry is red hot, and we will keep seeing strong startup activity and heavy investment in the industry.

    Odds: 95%


    60. Development of more natural ways to operate Virtual Reality
    VR is great, but one thing that needs solving is how to interact more naturally within virtual environments, and in 2017 we will see new research, prototypes, and devices that make it more natural to manipulate virtual worlds, such as comfortable, flexible gloves.

    Odds: 90%


    61. More biometrics at the government level
    Like it or not, the use of biometric data (from fingerprints to iris scans, and from the veins in your skin to your voice) will keep growing as a way for the world's governments to identify their citizens. Sub-prediction: many will start calling these technologies "The Mark of the Devil"...

    Odds: 80%


    62. Social media as a standard department within companies
    Today the concept of social media is treated as a mere subset of either Public Relations or Marketing, but as has been becoming evident for some time, we are reaching the moment when companies will realize they need an entire department just to manage it, given how important social networks now are to customer interaction. We will even see digital-transformation teams dedicated to helping companies take the steps needed to deal with this new reality.

    Odds: 80%


    63. Tech leaders as the new leaders of the business world
    Until now, the leaders of companies like Apple, Google, Amazon, Uber, Netflix, and other titans have been seen as an alien breed in the business world, not "real businessmen/businesswomen," their fortunes attributed simply to fortuitous good ideas. Yet it will become ever more evident that these are the new business leaders, and that their strategy of using technology as the new way of doing business is, in fact, the new way of doing business.

    Odds: 80%


    64. WhatsApp For Business
    I don't know why they have taken so long, but it is time for WhatsApp to release a "WhatsApp for Business" version that better centralizes a company's customer-communication work, so businesses can definitively stop depending on the traditional phone system for customer service (among many other uses). This would particularly affect the call-center business at the level of its IT departments.

    Odds: 50%


    65. Netflix keeps growing globally
    Netflix will keep solidifying its position as the #1 legal platform users turn to for video over the Internet.

    Odds: 90%


    66. Kodi to force the studios to reinvent film distribution
    Kodi today is perhaps the most popular platform enabling illegal downloads of pirated movies over the Internet, and although it is somewhat complicated to configure the first time, after that it is quite simple to use (many even consider it simpler than Netflix in some respects). It lets people who don't want to wait for movies to reach their markets, who don't want to pay exorbitant theater prices, or who want subtitled versions of their favorite films, do all of that easily. This model of simplicity will force the studios to reform the way they distribute movies toward a more global and standardized approach, though the process could take years.

    Odds: 70%


    67. Amazon Video to grow outside the US
    Amazon Video will grow steadily in markets outside the US, but will still not pose a real threat to Netflix's dominance in 2017.

    Odds: 90%


    68. Artificial intelligence against malware
    Malware (malicious programs such as viruses, trojans, ransomware, etc.) is ever more numerous and sophisticated, and we are at a point where these programs already evolve on their own to avoid detection, making the human job of detecting them increasingly difficult. So we will see the growth of a market that uses AI (particularly machine learning) to automatically detect software patterns that might be malicious and stop them before they do harm.

    Odds: 80%
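    The item above is a prediction, not a recipe, but the core idea — learning statistical patterns that separate benign from malicious binaries — can be sketched as a toy. Everything below (byte-histogram features, a nearest-centroid rule, the sample names) is an illustrative assumption, not any vendor's actual detector:

```python
import math
from collections import Counter

def byte_histogram(blob: bytes):
    """256-bin normalized byte-frequency vector: a common lightweight static feature.
    Plain text concentrates mass on ASCII letters; packed/encrypted payloads look
    closer to uniform."""
    counts = Counter(blob)
    total = len(blob) or 1
    return [counts.get(b, 0) / total for b in range(256)]

def centroid(vectors):
    """Per-dimension mean of a list of 256-dim feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(256)]

def classify(blob, benign_centroid, malicious_centroid):
    """Nearest-centroid decision over byte histograms."""
    h = byte_histogram(blob)
    d = lambda c: math.dist(h, c)  # Euclidean distance (Python 3.8+)
    return "malicious" if d(malicious_centroid) < d(benign_centroid) else "benign"
```

    Real products use far richer features (imports, entropy over windows, behavior traces) and real classifiers, but the pipeline shape — featurize, train on labeled samples, score new binaries — is the same.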


    69. First steps of the autonomous-truck freight revolution
    Just as we saw Uber revolutionize the taxi system, we will see a revolution born in the freight-transport market (trucks hauling trailers) by way of automated trucks with human drivers/operators aboard. This will allow a much more efficient and safer transport system, without the dangers of drivers falling asleep on the road, stealing the cargo, or arriving late. Expect strikes from the industry's unions, similar to what happened with Uber.

    Note, though, that in 2017 we will not yet see this market developed, but we will see the first test prototypes (some were already shown in limited form in 2016).

    Odds: 70%


    70. Bitcoin keeps growing, though many will invest without knowing why
    The market of buying bitcoins as a future investment will keep growing, though with the uncertainty of never knowing whether it will someday pay off, or when to get out before a possible collapse.

    Odds: 85%


    71. Blockchain, not bitcoins
    As I predicted last year (and previously in my talks of earlier years), I want to stress again this year (given its importance and the points that follow) that Bitcoin's great breakthrough was not the bitcoins themselves, but the blockchain ("chain of blocks") protocol that made bitcoins (and other currencies and transaction models) possible in the first place, and it is the use of blockchain that will carry the innovation forward in the financial field (and other fields).

    Odds: 95%
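    To make the "chain of blocks" idea concrete, here is a minimal hash-linked ledger sketched in Python. This is an illustrative toy, not Bitcoin's actual data structures (no Merkle trees, proof-of-work, or consensus); the point is only that each block commits to the previous block's hash, so tampering with any past entry breaks every later link:

```python
import hashlib, json, time

def make_block(data, prev_hash):
    """Build a block whose hash covers its payload and the previous block's hash."""
    block = {"time": time.time(), "data": data, "prev": prev_hash}
    block["hash"] = hashlib.sha256(
        json.dumps({k: block[k] for k in ("time", "data", "prev")},
                   sort_keys=True).encode()).hexdigest()
    return block

def verify(chain):
    """Recompute each hash and check the links; True only for an untampered chain."""
    for i, block in enumerate(chain):
        expected = hashlib.sha256(
            json.dumps({k: block[k] for k in ("time", "data", "prev")},
                       sort_keys=True).encode()).hexdigest()
        if block["hash"] != expected:
            return False
        if i > 0 and block["prev"] != chain[i - 1]["hash"]:
            return False
    return True

# A tiny ledger: a genesis block plus two transactions.
chain = [make_block("genesis", "0" * 64)]
chain.append(make_block("Alice pays Bob 5", chain[-1]["hash"]))
chain.append(make_block("Bob pays Carol 2", chain[-1]["hash"]))
```

    Editing the data of any earlier block makes `verify` fail, which is exactly the tamper-evidence property that supply chains and financial ledgers want from blockchain.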


    72. States attempt to regulate bitcoin
    Sooner or later (if the moment arrives when virtual currencies like bitcoin start to account for a significant share of global transactions), states will begin pushing to legislate their use, though they will quickly notice it is not as easy as regulating traditional currencies. One way they will try is through the use of blockchain in transactions that go beyond bitcoins themselves, extending to all transactions with (and between) financial institutions, down to even the most mundane consumer transactions.

    Odds: 60%


    73. Blockchain startups to be worth millions and be acquired quickly
    Startups (here, new technology companies) focused on blockchain technology will start being worth many millions of dollars and being acquired quickly, just as once happened with companies specializing in other growth niches (such as VR, AR, ML, and AI).

    Odds: 85%


    74. The beginnings of distributed applications on blockchain
    In 2017 we will see a lot of experimentation with truly distributed applications across the Internet built on the blockchain concept, and in that space the project that will start showing the way will be Ethereum.

    Odds: 75%


    75. From supply chain to blockchain
    The blockchain concept will increasingly be used in the world of traditional supply chains, gradually being adopted in any process that can benefit from a trustworthy log of transactional steps. That means we will see blockchain appear not only in the financial industry but also in healthcare, retail, the public sector, and education, among others.

    Odds: 95%


    76. Eliax and opportunities in Virtual Reality
    Rumor has it that I may get involved in a Virtual Reality project to which I will invite some of you to participate, so stay tuned in the coming weeks... ;)

    Odds: 85%


    77. First 3D printer under US$99
    We already have 3D printers for under US$250, but I am sure some company (perhaps one under the radar) must be thinking of creating a simple 3D printer that costs US$99. How? Not necessarily by making money on the printer, but rather on the printing materials. I imagine such a company should also include a tool in the form of an app that makes it quite simple to create objects, or at least to personalize pre-existing ones. If we don't see this this year, then next year at the latest.

    Odds: 65%


    78. The Apple AirCharger
    After what I wrote about the iPhone 8, I got to thinking that nothing would stop Apple from using the same wireless technology in other devices, and along the way offering a device I call the AirCharger that recharges their batteries wirelessly. Examples would be a new keyboard, a new mouse, a new trackpad, and perhaps even an Apple Pencil 2 (the "Air Pencil"?).

    Odds: 70%

    ---

    So there you have the list. As is now tradition, we'll review how many of these came true at the end of 2017. Until then, enjoy life!



          Biometrics and machine learning: Gemalto's winning combination for greater trust in online banking services   
    A risk-assessment solution that uses big data to understand user behavior and adapt the banking authentication method accordingly. Banks can now tailor authentication to each user to offer a customer experience...
          Cognitive Search: what role does machine learning play?   
    According to the independent analyst firm Forrester, Cognitive Search is the new generation of Enterprise Search solutions built on artificial intelligence.
    The question of the potential and value of artificial intelligence (AI) technologies - such as machine...


              Zymergen continues SynBio fundraising frenzy with $130M Series B    
    Zymergen has raised $130 million to advance its machine learning-enabled microbe design business. The investment, which comes less than 18 months after a $44 million Series A, will enable Zymergen to grow its capacity and team in a bid to cement itself as the operation businesses turn to when they need a microbe engineered to produce an ingredient.
          Microsoft publishes a hands-on data science guide built on SQL Server 2017's "Machine Learning Services"   

    Microsoft has released a hands-on guide that teaches how to build an end-to-end data science solution using Microsoft Machine Learning Services, one of the headline features of SQL Server 2017 CTP 2.0.


              Hedge Funds Look to Machine Learning, Crowdsourcing for Competitive Advantage   
    Hedge funds are testing new quantitative strategies that could supplant traditional fund managers
          VoD software market may expand to $7.5 bn by '22, APAC leads   

    MUMBAI: The global video streaming (VoD) software market size is expected to grow from USD 3.25 billion in 2017 to USD 7.50 billion by 2022, at a Compound Annual Growth Rate (CAGR) of 18.2%. The major factors driving the video streaming software market are increasing traction of VaaS in enterprises due to lower cost of ownership, extensive growth of online videos, and growing needs for on-demand streaming. However, network connectivity issues and the technical difficulties involved in video streaming are some of the major factors hindering the growth of the video streaming software market, according to ReportLinker study.

    Increasing traction of Video-as-a-Service (VaaS) in enterprises due to lower cost of ownership, the extensive growth of online videos, and growing needs for on-demand streaming are driving the video streaming software market.

    Video Analytics is expected to witness the highest growth rate during the forecast period: The video analytics solutions segment is expected to have the highest growth rate during the forecast period, as video analytics solutions offer a 360-degree view of enterprise viewer habits and behaviors, producing critical intelligence to support enterprise strategic goals. Through video analytics, enterprises can club Artificial Intelligence (AI), machine learning, and cognitive technologies to extract actionable insights from the video files.

    Broadcasters, operators, and media vertical is expected to have the largest market share in 2017: The broadcasters, operators, and media vertical is expected to witness the highest adoption of video streaming software, as the video streaming software helps broadcasters, operators, and media companies to maximize monetisation, minimize operational overheads, offer better services, and enhance viewing experiences.

    Asia Pacific (APAC) is expected to grow at the highest CAGR: The APAC region includes emerging economies such as China, Australia, Singapore, and India. In these countries, enterprises are rapidly deploying video streaming software solutions. APAC is expected to grow at the highest CAGR during the forecast period. This is mainly due to the increasing adoption of advanced technologies, growing usage of digital media among organizations and individuals, and the rising awareness about business productivity. In terms of market size, North America is expected to lead the video streaming software market in 2017.

    In-depth interviews were conducted with Chief Executive Officers (CEOs), marketing directors, innovation and technology directors, and executives from various key organizations operating in the video streaming software market.

    The breakup of the profiles of the primary participants is given below:

    • By Company: Tier 1 – 24%, Tier 2 – 41%, and Tier 3 – 35%
    • By Designation: C-Level – 57%, Director Level – 36%, Others – 7%
    • By Region: North America – 49%, Europe – 28%, APAC – 16%, RoW – 7%

    The key video streaming software providers profiled in the report are as follows:
    Anvato, Inc. (Mountain View, US), BoxCast (Cleveland, US), Brightcove, Inc. (Boston, US), Contus (Chennai, India), DaCast (San Francisco, US), Haivision, Inc. (Montreal, Canada), IBM Corporation (New York, US), Kaltura, Inc. (New York, US), Kollective Technology, Inc. (Bend, US), KZO Innovations (Reston, US), MediaPlatform (Beverly Hills, US), Ooyala, Inc. (Santa Clara, US), Nuvola Media PTE Ltd. (Singapore), Panopto (Pittsburgh, US), Polycom, Inc. (San Jose, US), Qumu Corporation (Minneapolis, US), Ramp (Boston, US), Sonic Foundry, Inc. (Madison, US), StreamShark (Victoria, Australia), uStudio, Inc. (Austin, US), VBrick (Herndon, US), VIDIZMO, LLC. (Sterling, US), Vzaar (London, UK), Wowza Media Systems LLC. (Colorado, US) and YuJa (San Jose, US).


          Machine Learning Specialist   
    CA-Santa Clara, Machine Learning Specialist. Location: Santa Clara, CA. 3-6 month contract-to-hire. Embedded (Raspberry Pi) experience is a huge plus. Most important is experience in computer vision and deep neural networks. Experience developing applications utilizing Artificial Intelligence, Computer Vision, Machine Learning, Image Processing, and/or Computer Graphics. Experience with mobile device management,
              Advanced Gesture Recognition in iOS   
    DollarP-ObjC is an Objective-C port of the $P gesture recognizer to be used in iOS applications. What is $P? From the $P website: The $P Point-Cloud Recognizer is a 2-D gesture recognizer designed for rapid prototyping of gesture-based user interfaces. In machine learning terms, $P is an instance-based nearest-neighbor classifier with a Euclidean scoring function, […]
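    As a rough illustration of what "instance-based nearest-neighbor classifier with a Euclidean scoring function" means, here is a deliberately simplified Python sketch in the spirit of $P. This is not the DollarP-ObjC implementation: the resampling and the greedy point matching below are naive stand-ins for $P's path-length resampling and iterative cloud alignment.

```python
import math

def normalize(points, n=32):
    """Resample a gesture to n points, scale to a unit box, center on the centroid.
    (Simplified preprocessing; real $P resamples by path length across strokes.)"""
    step = max(1, len(points) // n)
    pts = points[::step][:n]
    while len(pts) < n:          # pad short gestures by repeating the last point
        pts.append(pts[-1])
    xs = [p[0] for p in pts]; ys = [p[1] for p in pts]
    scale = max(max(xs) - min(xs), max(ys) - min(ys)) or 1.0
    cx = sum(xs) / n; cy = sum(ys) / n
    return [((x - cx) / scale, (y - cy) / scale) for x, y in pts]

def cloud_distance(a, b):
    """Greedy one-to-one Euclidean matching cost between equal-size point clouds."""
    unmatched = list(range(len(b)))
    total = 0.0
    for ax, ay in a:
        j = min(unmatched, key=lambda k: (ax - b[k][0])**2 + (ay - b[k][1])**2)
        unmatched.remove(j)
        total += math.hypot(ax - b[j][0], ay - b[j][1])
    return total

def recognize(candidate, templates):
    """1-nearest-neighbor over (label, points) templates: no training phase,
    just stored instances and a distance - hence 'instance-based'."""
    c = normalize(candidate)
    return min(templates, key=lambda t: cloud_distance(c, normalize(t[1])))[0]
```

    Adding a new gesture class is just appending another `(label, points)` template, which is what makes this family of recognizers attractive for rapid prototyping.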
              Machine Learning Meets the Lean Startup   
    We just finished our Lean LaunchPad class at UC Berkeley’s engineering school where many of the teams embedded machine learning technology into their products. It struck me as I watched the teams try to find how their technology would solve real customer problems, is that machine learning is following a similar pattern of previous technical infrastructure […]
              (USA-WA-Seattle) Technical Program Manager, Goodwill   
    **Intro:** Facebook's mission is to give people the power to share, and make the world more open and connected. Through our growing family of apps and services, we're building a different kind of company that helps billions of people around the world connect and share what matters most to them. Whether we're creating new products or helping a small business expand its reach, people at Facebook are builders at heart. Our global teams are constantly iterating, solving problems, and working together to make the world more open and accessible. Connecting the world takes every one of us—and we're just getting started. **Summary:** Our team (Goodwill) is responsible for creating product and brand experiences with care and compassion, from birthdays, Friends Day and anniversary videos to On This Day - and now through messages at the top of people’s News Feed on moments and events that people care about. The purpose of the team is to create an environment that allows people to authentically express their feelings and connect with what matters to them. Are you passionate about helping those around you be more effective in achieving our shared goals? Do you think about technology and data as a way to increase efficiency and change how a business operates? We are looking for candidates that share our passion for tackling complexity head-on, to help build platforms that can scale through multiple orders of magnitude. As a Technical Program Manager, you will play a key role within our engineering organization, driving launches of high impact projects to hundreds of millions of people per day. This is a full-time position based in our office in Seattle and will report to the lead Engineering Manager in Menlo Park. **Required Skills:** 1. Develop and manage end-to-end ranking and infra project plans and ensure on-time delivery. 2. Provide hands-on program management during analysis, design, development, testing, implementation, and post implementation phases. 3. 
Perform risk management and change management on ranking and infra projects. 4. Provide day-to-day coordination and quality assurance for projects and tasks. 5. Manage XFN relationships and inter-team alignment on project schedule and deliverables. 6. Communicate ranking and infra project plans and updates between SEA/MPK teams. 7. Drive internal and external process improvements across multiple teams and functions. 8. Lead the initiatives to measure and improve the stability for high throughput systems. 9. Identify and advocate the best practices to improve engineering efficiency across the team. **Minimum Qualifications:** 10. B.S. in a Computer Science or equivalent experience. 11. At least five years of software engineering, systems engineering, program/product management, or similar experience. 12. Experience in ranking and personalization project planning and coordination. 13. Experience in backend reliability and/or performance work. 14. Communication skills and experience working with technical management teams. 15. Organizational and coordination skills along with multi-tasking capabilities to get things done in a fast-paced environment. 16. Analytical and problem-solving skills, exposure to large-scale systems and some experience writing code/queries. **Preferred Qualifications:** 17. Experience with Machine Learning model feature selection and performance evaluation. 18. Experience with large-scale video generation, storage, and delivery systems. 19. Experience with SQL, Hive, Presto, Tableau, R, or similar data processing tools. **Industry:** Internet **Equal Opportunity:** As part of our dedication to the diversity of our workforce, Facebook is committed to Equal Employment Opportunity without regard for race, color, national origin, ethnicity, gender, protected veteran status, disability, sexual orientation, gender identity, or religion. 
We are also committed to providing reasonable accommodations for qualified individuals with disabilities and disabled veterans in our job application procedures. If you need assistance or an accommodation due to a disability, you may contact us at accommodations-ext@fb.com or you may call us at 1+650-308-7837.
              Researchers Think They Can Use Twitter to Spot Riots Before Police   

    Researchers in the UK used machine learning algorithms to analyze 1.6 million tweets in London during the infamous 2011 riots, which resulted in widespread looting, property destruction and over 3,000 arrests. According to the researchers, analyzing Twitter data to map out where violence occurred in London boroughs…



              (USA-WA-Seattle) data analyst, Retail Operations, Global Ops - Seattle, WA   
    **Summary of Experience** + Years within data analysis field or discipline (Minimum 1 year experience) **Basic Qualifications** + Education: BA/BS with concentration in quantitative discipline - Statistics, Math, Comp Science, Engineering, Econ, Quantitative Social Science or similar discipline **Required Knowledge, Skills and Abilities** + Ability to apply knowledge of multidisciplinary business principles and practices to achieve successful outcomes in cross-functional projects and activities + Exposure and business-applicable experience in several Modeling & Machine Learning Techniques (regression, tree models, survival analysis, cluster analysis, forecasting, anomaly detection, association rules, etc.) + Exposure and business-applicable experience in several data ETL (Teradata, Oracle, SQL, Python, Java, Ruby, Pig) + Exposure and business-applicable experience in several analytic languages (R, SAS, SPSS, Stata) + Big data processing techniques, preferred + Retail, customer loyalty, and eCommerce experience, preferred Starbucks and its brands are an equal opportunity employer of all qualified individuals, including minorities, women, veterans & individuals with disabilities. Starbucks will consider for employment qualified applicants with criminal histories in a manner consistent with all federal, state, and local ordinances.
              (USA-WA-Seattle) Applied Scientist   
    Amazon Web Services is looking for an individual with experience applying machine learning to large scale data sets. We are looking for a candidate that would like to apply this expertise to implement core technology for a customer facing service. If you are interested in researching, designing and implementing innovative solutions for never-before-solved problems, this will be an exciting opportunity. The AWS Security Services team builds technologies that help customers strengthen their security posture and better meet security requirements in the AWS Cloud. The team interacts with security researchers to codify our own learnings and best practices and make them available for customers. We are building massively scalable and globally distributed security systems to power next generation services. Key Responsibilities: · Rapidly design, prototype and test many possible hypotheses in a high-ambiguity environment, making use of both quantitative and business judgment · Collaborate with software engineering teams to integrate successful experiments into large scale, highly complex production services. · Report results in a matter that is statistically rigorous · Interact with Security Engineers and related domain experts to dive deep into the types of challenges that we need innovative solutions for + Strong programming skills with an object-oriented language + Experience with at least one of Apache Spark, Hadoop or Storm + Ability to develop prototypes by manipulating and analyzing complex, high-volume, high-dimensionality data from various sources. 
+ Eager to learn new algorithms, new application areas and new tools + Excellent communication skills + Desire and energy to work in a fast-paced environment + This role is Seattle based + PhD in machine learning, statistics or a related quantitative field + Domain experience with threat detection techniques + Track record of developing novel algorithms to help detect stealthy zero-day attacks + 5+ years of relevant experience in industry AMZR Req ID: 552898 External Company URL: www.amazon.com
              (USA-WA-Seattle) Research Scientist   
    Amazon Marketplace enables over 2MM sellers and small businesses across the world to list their products for sale to Amazon customers. The Marketplace team is seeking a Research Scientist to use statistical and/or machine learning techniques to design, evangelize, and implement state-of-the-art solutions for never-before-solved problems, helping Marketplace Sellers offer customers great prices and optimizing fees to grow the business. This role will be a key member of an Advanced Analytics team supporting pricing and fee related business challenges within Marketplace and will be based in Seattle, WA. The Research Scientist will work closely with other applied scientists, machine learning experts, and economists to design and run experiments, research new algorithms, and find new ways to understand pricing dynamics and improving customer experience. The Research Scientist will also partner with technology and business leaders to solve business and technology problems using scientific approaches to build new services that surprise and delight our customers. Research science at Amazon is a highly experimental activity, although theoretical analysis and innovation are also welcome. Our scientists work closely with software engineers to put algorithms into practice. They also work on cross-disciplinary efforts with other scientists within Amazon. The key strategic objectives for this role include: + Understanding drivers, impacts, and key influences on pricing dynamics within Marketplace. + Optimizing Marketplace fees to improve Customer experience and grow the Amazon business. + Driving actions at scale to help provide low prices for Customers using scientifically-based data and decision making. + Helping to build production systems that take inputs from multiple models and make decisions in real time. + Automating quality feedback loops for algorithms in production. + Utilize Amazon systems and tools to effectively work with terabytes of data. 
MS in Computer Science, Mathematics, Statistics, Machine Learning, Economics, or a related quantitative field, PhD preferred. The ideal candidate will have 5+ years of relevant experience, including: + Strong ML algorithm development experience. + Demonstrated ability to implement different techniques in predictive analytics, forecasting, modeling & measurement. + Experience with experimental design. + Expertise in analyzing very large experimental and observational data sets. + Proficiency in one scripting language (Python, Perl, etc.). + Exposure to mathematical programming libraries (R, Matlab, Weka, SAS, etc.). + Familiarity with standard database languages such as SQL. + Strong verbal/written communication skills, including an ability to effectively collaborate with both research and technical teams. Preferred Qualifications include: + PhD in Computer Science, Mathematics, Statistics, Machine Learning, Economics, or a related quantitative field + Domain experience in pricing is helpful but not required. AMZR Req ID: 552592 External Company URL: www.amazon.com
              (USA-WA-Seattle) Senior Product Manager - Tech   
    Amazon has a diverse set of global businesses. The Technology Center of Excellence in the Enterprise Risk Management and Compliance (ERMC) team provides technology products and solutions to help these businesses address compliance requirements. As a Senior Product Manager, you will play a leading role in building these technical products - you will work closely with Amazon businesses to meet legal audit and compliance requirements for areas like risk mitigation, fraud, screening, and controls. You will have a relentless focus on the customers and dive deep into the challenges they face. You will have responsibility through the full product lifecycle, including setting product strategies, defining roadmaps, creating features, executing projects, directing the launch, and driving the adoption of products. You'll also leverage machine learning and data analytics in building these products. The Technology Center of Excellence in the ERMC is a rapidly growing function and team. The ideal Senior Product Manager will: - Excel and thrive in a fast-paced environment like Amazon - Possess exceptional analytical, project management, writing, and organizational skills - Be a team player and willing to roll up your sleeves and do whatever is necessary to get things done - Entrepreneurial spirit, with a track record of delivering results - Bachelor's degree in Computer Science, Computer Engineering, or a related field - 8+ years’ experience in software development, data science, technical program or product management, in a related business - 2+ years in Product Management - Ability to dive deep into technical concepts and architecture as well as communicate technical information effectively to less technical audiences - Previous experience working in a web services, cloud, and user interface design environment - MBA, MS, or advanced degree in a related field - Proficient in programming and scripting 
languages for data science and analysis including SQL - Lead and influence across different organizations and align diverse teams to a common goal - Handle competing priorities in a fast-paced and demanding environment Amazon is an Equal Opportunity-Affirmative Action Employer – Minority / Female / Disability / Veteran / Gender Identity / Sexual Orientation AMZR Req ID: 552567 External Company URL: www.amazon.com
              (USA-WA-Seattle) Data Engineer, Amazon Video   
    Amazon Video (AV) is a digital video streaming and download service that offers Amazon customers the ability to rent, purchase or subscribe to a huge catalog of videos. This position focuses on the rental and purchase side of the Amazon Video business. As a Data Engineer in Amazon Video, you will work directly with stakeholders and technical partners to design and implement cutting edge data solutions that provide actionable insights to the business. You will be leading the charge in making granular event data easily usable and accessible, and participate in developing the technical strategy to do so. You will work with a wide range of data technologies (e.g. Kinesis, Spark, Redshift, EMR, Hive, and Tableau) and stay abreast of emerging technologies, investigating and implementing where appropriate. Our ideal candidate has outstanding technical skills, analytical capabilities, business insight, and communication skills, and maintains a strong passion for technology. In this role you will: 1. Design, develop, implement, test, document, and operate large-scale, high-volume, high-performance data structures for business intelligence analytics. 2. Partner with analysts, applied scientists, data engineers, business intelligence engineers, and software development engineers across Amazon to produce complete data solutions. 3. Interface directly with stakeholders, gathering requirements and owning automated end-to-end reporting solutions 4. Implement data structures using best practices in data modeling, ETL/ELT processes, and SQL, Oracle, Redshift, and OLAP technologies. 5. Gather business and functional requirements and translate these requirements into robust, scalable, operable solutions that work well within the overall data architecture. 6. Evaluate and make decisions around dataset implementations designed and proposed by peer data engineers. 
+ BS degree in information management, computer science, math, statistics, or equivalent technical field + 2+ years of relevant experience in business intelligence role, including data warehousing and business intelligence tools, techniques and technology, as well as experience in diving deep on data analysis or technical issues to come up with effective solutions + Mastery of relevant technical skills, including SQL, data modeling, schema design, data warehouse administration, BI reporting tools (e.g. Tableau), scripting for automation + Experience in data mining structured and unstructured data (SQL, ETL, data warehouse, Machine Learning etc.) in a business environment with large-scale, complex data sets + Proven ability to look at solutions in unconventional ways. Sees opportunities to innovate and can lead the way + Excellence in technical communication and experience working directly with stakeholders + 3+ years’ experience in Oracle and Redshift including complex querying, analytical functions, and database tuning for optimal query performance with large data sets + 3+ years’ experience in Datanet or other ETL technologies + Experience with data processing using custom scripts to pull and load from APIs or Files. + Experience with Python or similar programming/scripting language(s). AMZR Req ID: 552093 External Company URL: www.amazon.com
              (USA-WA-Seattle) Applied Scientist   
    Seeking Applied Researchers to build the future of the Alexa Shopping Experience at Amazon. At Alexa Shopping, we strive to enable shopping in everyday life. We allow customers to instantly order whatever they need, by simply interacting with their smart devices such as Echo, Fire TV, and beyond. Our services allow you to shop, anywhere, easily without interrupting what you’re doing – to go from “I want” to “It’s on the way” in a matter of seconds. We are seeking the industry's best applied scientists to help us create new ways to shop. Join us, and help invent the future of everyday life. The products you would envision and craft require ambitious thinking and a tireless focus on inventing solutions to solve customer problems. You must be passionate about creating algorithms and models that can scale to hundreds of millions of customers, and insanely curious about building new technology and unlocking its potential. The Alexa Shopping team is seeking an Applied Scientist who will partner with technology and business leaders to build new state-of-the-art algorithms, models and services that surprise and delight our voice customers. As part of the new Alexa Shopping team you will use ML techniques such as deep learning to create and put into production models that deliver personalized shopping recommendations, allow us to answer customer questions and enable human-like dialogs with our devices. The ideal candidate will have a PhD in Mathematics, Statistics, Machine Learning, Economics, or a related quantitative field, and 5+ years of relevant work experience, including: · Proven track record of achievements in natural language processing, search and personalization. · Expertise in a broad set of ML approaches and techniques, ranging from artificial neural networks to Bayesian non-parametric methods. · Experience in structured prediction and dimensionality reduction. · Strong fundamentals in problem solving, algorithm design and complexity analysis. 
    · Proficiency in at least one scripting language (e.g. Python) and one large-scale data processing platform (e.g. Hadoop, Hive, Spark). · Experience with cloud technologies (e.g. S3, DynamoDB, Elasticsearch) and experience in data warehousing. · Strong personal interest in learning, researching, and creating new technologies with high commercial impact. · Track record of peer-reviewed academic publications. · Strong verbal/written communication skills, including an ability to effectively collaborate with both research and technical teams and earn the trust of senior stakeholders. AMZR Req ID: 551723 External Company URL: www.amazon.com
              (USA-WA-Seattle) BI Engineer - II   
    At Amazon, we're working to be the most customer-centric company on earth. To get there, we need exceptionally talented, bright, and driven people. We are looking for a Sr. BI Engineer to be a part of the consumer Recruiting Ops Analytics & Reporting (ROAR) team that will build a BI & Data Analytics platform from scratch, thereby enabling the hiring of exceptional talent that meets the Amazon bar. Our success depends on our ability to manage and analyze the data that we generate, as well as curating external data sources for identifying the right resources for Amazon. This position requires the ability to dive deep into large amounts of data, a great business sense, and the desire to influence key strategic decisions with data-driven analysis. Working within the business teams and collaborating with key stakeholders across the company, you will have the opportunity to design and implement features to enhance the experience of Amazon Recruiters and prospective Employees. We offer a technologically-sophisticated, customer-focused, data-driven and friendly work environment. Sr. BI Engineers at Amazon work directly with a diverse scientific team including computer engineers, as well as other data engineers and scientists. As a Sr. BI Engineer in Amazon recruiting you will partner with business, technology and recruiting leaders to identify future trends, improve turnaround time and hiring efficiency. If you are excited about data and machine learning, are results oriented, and want to join a growing analytics team within Amazon - this role is for you. The ideal candidate will have excellent analytical abilities, outstanding business acumen and judgment, intense curiosity, strong technical skills, and superior written and verbal communication skills. He/she will have a strong bias toward data-driven decision making. 
    He/she will be a self-starter, comfortable with ambiguity, able to think big and be creative (while paying careful attention to detail), and will enjoy working in a fast-paced dynamic environment. · BS/BA in Computer Science, Math, Statistics or a related field · 3+ years of professional experience in a business analyst/data analyst/statistical analyst role · Excellent knowledge of SQL and exposure to Excel · Excellent communication (verbal and written) and interpersonal skills, and an ability to effectively communicate with both business and technical teams · Proven problem-solving skills, project management skills, attention to detail, and exceptional organizational skills · Proficient with one or more BI tools, including Tableau, MicroStrategy, Power BI, etc. · MBA / Master’s / PhD degree in a relevant field · Understanding of Big Data technologies and solutions (EMR, Hive etc.) · Understanding of Amazon Web Services (AWS) technologies · Experience working within a high-growth technology company AMZR Req ID: 551176 External Company URL: www.amazon.com
              (USA-WA-Seattle) Business Analyst, FBA Fees   
    Fulfillment by Amazon (FBA) leverages Amazon’s global fulfillment and customer service network for third party sellers who want to grow their business on and off Amazon.com. FBA enables customers to take advantage of Free Super Saver Shipping and Amazon Prime on third party items, while sellers can focus on selling rather than shipping. The FBA Fee team is looking for an experienced and self-driven Business Analyst to join the team. The candidate is expected to leverage the latest in data mining and predictive modeling techniques to enhance our current pricing calculation models. The individual will be responsible for developing quantitative models to improve our understanding of Seller Behavior and to support other ongoing analytical efforts of the FBA Fees team. Ideally, the candidate should be comfortable working with ambiguous data and with data from multiple sources. You would be expected to analyze large datasets, identify trends and patterns, and uncover insights for key business decisions. The candidate will work closely with teams in Product Development, Marketing, Business Strategy, Supply Chain and Software Development on a day-to-day basis. What we are looking for: + Experience in mining large quantities of data using SQL and other tools (required). + Experience in using statistical and econometric concepts to solve real-life business problems. + Strong problem solving skills. + Someone who can think big and be creative (while paying careful attention to detail), and will enjoy working in a fast-paced dynamic environment. 
    Key Responsibilities: + Drive development of quantitative models necessary for the evaluation and implementation of new pricing strategies + Develop tools to understand Sellers’ behaviors related to pricing changes + Collaborate with product managers to develop pricing recommendations for new features or services + Partner with finance and product management as a leader of quantitative analysis + Communicate with software developers to ensure proper implementation of complex models + Analyze and solve business problems at their root, stepping back to understand the broader context + Write high quality code to retrieve and analyze data + Learn and understand a broad range of Amazon’s data resources and know how, when, and which to use + Manage and execute entire projects or components of large projects from start to finish including project management, data gathering and manipulation, synthesis and modeling, problem solving, and communication of insights and recommendations + M.S. in a quantitative field such as Economics, Analytics, Mathematics, Statistics or Operations Research. + At least 2 years of relevant experience in analytics using advanced forecasting, optimization and/or machine learning techniques + Experience solving complex quantitative business challenges + Verbal/written communication & data presentation skills, including an ability to effectively communicate with both business and technical teams + Experience in data mining (SQL, ETL, data warehouse, etc.) and using databases in a business environment with large-scale, complex data + At least 4 years of relevant experience in advanced forecasting, optimization and/or machine learning techniques. + Ability to build model prototypes using appropriate tools (R/SAS/Python…) + Knowledgeable in demand modeling, pricing optimization, and customer/product segmentation AMZR Req ID: 550158 External Company URL: www.amazon.com
              (USA-WA-Seattle) Software Development Engineer   
    Alexa is the groundbreaking cloud-based intelligent agent that powers Echo and other devices designed around your voice. Our mission is to push the envelope in Artificial Intelligence (AI), Natural Language Understanding (NLU), Machine Learning (ML), Dialog Management, Automatic Speech Recognition (ASR), and Audio Signal Processing, in order to provide the best-possible experience for our customers. We’re looking for a Software Development Engineer to help build industry-leading conversational technologies and machine learning systems that customers love. As a Software Development Engineer for the Alexa team, you will be responsible for translating business and functional requirements into concrete deliverables with the design, development, testing, and deployment of highly scalable distributed services. You will also partner with scientists and other engineers to help invent, implement, and connect sophisticated algorithms to our cloud based engines. Prior domain knowledge including AI, ML, and NLU is preferred, though not required. However, strong motivation to learn ML, AI and NLU is critical for successful candidates. Candidates should also be very agile in developing flexible software with respect to scientific experimentation methods and usage patterns. 
    Additional responsibilities include: + Designing, developing and maintaining core system features, services and engines + Helping define product features, drive the system architecture, and spearhead the best practices that enable a quality product + Working with scientists and other engineers to investigate design approaches, prototype new technology, and evaluate technical feasibility + Operating in an Agile/Scrum environment to deliver high quality software against aggressive schedules + Bachelor's degree in Electrical Engineering, Computer Science, Mathematics, or related technical field + Familiarity with programming languages such as C/C++, Java, Perl or Python and open-source technologies (Apache, Hadoop) + Experience with OO design and common design patterns + Knowledge of data structures, algorithm design, problem solving, and complexity analysis + Graduate degree (MS or PhD) in Electrical Engineering, Computer Science, Mathematics, or related technical field + Experience developing cloud software services and an understanding of design for scalability, performance and reliability + Experience defining system architectures and exploring technical feasibility trade-offs + Experience optimizing for short term execution while planning for long term technical capabilities + Ability to prototype and evaluate applications and interaction methodologies + Ability to produce code that is fault-tolerant, efficient, and maintainable + Academic and/or industry experience with standard AI and ML techniques, NLU and scientific thinking + Experience working effectively with science, data processing, and software engineering teams + Ability and willingness to multi-task and learn new technologies quickly + Written and verbal technical communication skills with an ability to present complex technical information in a clear and concise manner to a variety of audiences Amazon is an Equal Opportunity-Affirmative Action Employer – Minority / Female / Disability / Veteran / 
Gender Identity / Sexual Orientation AMZR Req ID: 547716 External Company URL: www.amazon.com
              (USA-WA-Seattle) Software Development Engineer   
    Amazon Global Selling (AGS) is focused on breaking down barriers to allow 3rd-party Sellers to sell their items to Customers around the world. The AGS team develops software that removes friction from the process of cross border selling for 3rd-party Sellers. The AGS team is responsible for development of systems that enable Sellers to expand their business to new customers around the world through increased exports and listing of their products for sale in new countries. We need your help to grow this business by building highly-available and scalable distributed systems that provide clean interfaces between Sellers, Customers and Amazon's software. Within AGS, the Global Selling Intelligence (GSI) team is responsible for building a highly-available, scalable artificial intelligence platform that reduces the complexity of adding Machine Learning (ML) to Global Selling products and services for cross-border sellers. We collect petabytes of data from a variety of data sources inside and outside Amazon including Amazon’s Product catalog, seller inventory, customer orders, and page loads. Our data and ML platform enables ML exploration and production by providing services for AGS ML and tech teams to access data and make predictions hundreds of thousands of times per day, using Amazon Web Service’s (AWS) Redshift, Hive, Spark, etc. AGS is seeking an outstanding Software Development Engineer to join the Global Selling Intelligence (GSI) team. In this role, you will work in one of the world's largest and most complex data environments. You will apply your deep expertise in the design, creation, and management of large datasets to build highly-available systems for the extraction, ingestion, and processing of data at Amazon scale. In this role, you will own the end-to-end development of solutions to complex problems and play an integral role in strategic decision making. 
You will lead and mentor junior engineers and lead communications with management and other teams. + Bachelor’s Degree in Computer Science or related field + 3+ years of software development experience in at least one modern programming language (Python, Java, Scala, etc) + Experience with Object-Oriented Programming and Design + Strong Computer Science fundamentals in data structures, algorithms, problem solving, distributed systems, and complexity analysis + Experience with system architecture and design + Deep knowledge in data mining, machine learning, or information retrieval. + Experience with Big Data Technologies (Hadoop, Hive, Hbase, Pig, Spark, etc.) + Master's Degree in Computer Science, Math or a related field + Industry experience as a Back-End Software Engineer or related specialty + Experience building highly available, distributed systems for data extraction, ingestion, and processing of large data sets in production + Experience building data products incrementally and integrating and managing datasets from multiple sources + Experience with AWS technologies including Redshift, Aurora, S3, EMR, EML + Experience with unstructured data in NoSQL databases + Knowledge of professional software engineering best practices including coding standards, code reviews, source control management, configuration management, build processes, testing, and operations + Experience with Agile software development in a UNIX/Linux environment + Strong written and spoken communication skills Amazon is an Equal Opportunity-Affirmative Action Employer – Minority / Female / Disability / Veteran / Gender Identity / Sexual Orientation. AMZR Req ID: 545376 External Company URL: www.amazon.com
              (USA-WA-Seattle) Software Development Engineer   
    Do you want to build premium shopping experiences for millions of Amazon customers? Do you want to work on the performance challenge of providing the best recommendations in less than 200 milliseconds, given millions of customers and millions of products? Are you interested in working on machine learning and data science, believing that not every customer should have the same experience? Amazon has a role for you. Amazon is looking for an experienced, results-oriented engineer to predict patterns in the interests of customers and the products they love. Our vision is to provide a personalized shopping experience for Amazon devices, accessories, and services across all channels, including voice, applying machine learning science, which will drive continuous innovation and change the way people shop on Amazon. You will seek out hidden and valuable correlations between the easily-measurable and the hard-to-quantify, within immense volumes of real-world data. You will develop ML models and train them to solve the personalization challenges. You will formulate quantitative metrics which allow us to track progress and audit solutions with minimal cost and human effort. You will also pioneer development of an ML platform and infrastructure with scalability and performance in mind. You will work closely with product managers and UX designers to identify and solve real-world customer problems and business opportunities. You will have the opportunity to interact with senior engineers throughout the company to determine the best practices for architecting, building, testing, and deploying software solutions/components. You will have complete ownership to define new shopping experiences and drive innovation with the latest technologies, including machine learning. We encourage experimentation and pushing innovative technology solutions. You will also have opportunities to build platforms and influence other groups as you define new customer experiences. 
    We are a full stack team, so you will gain experience in all aspects of our multi-tiered environment. Software development engineer positions require a depth and breadth of knowledge in design and development, experience with agile methodologies, proficiency in a high-level language, and experience building highly scalable systems involving distributed services and persistent storage. You will own the design of major deliverables and have opportunities to build them from scratch. This is a high-visibility and fast-paced environment where you will make a direct impact on the customer experience and the bottom line of the company. + Bachelor's degree in Computer Science or another technical field, or commensurate professional experience. + 4+ years of professional software development experience + Proficiency in at least one modern object-oriented programming language such as Java, C++ or C# + Deep understanding of CS fundamentals including data structures, algorithms and complexity analysis + Experience building large-scale, high-performance systems in a complex, multi-tiered, distributed environment + Design and architecture knowledge as well as familiarity with object oriented analysis and design patterns (OOA/OOD) + Ability to thrive in a fast-paced, dynamic environment + Proven track record of taking ownership and successfully delivering results + Experience with service-oriented architecture and web application/services development from scratch + Experience working in a UNIX/Linux environment is preferred + Understanding of performance tradeoffs, load balancing and operational issues + Ability to clearly and concisely communicate with technical and non-technical stakeholders across all levels of the organization AMZR Req ID: 544927 External Company URL: www.amazon.com
              (USA-WA-Seattle) Software Development Engineer   
    The Amazon Payments Issuance team is responsible for developing the platform and applications used to introduce new and innovative payment methods to customers, as well as support Amazon’s global CoBrand and Private Label credit cards along with the world’s largest rewards catalog, Shop with Points. The technology we build and operate varies widely, ranging from large-scale distributed engineering incorporating the latest from machine learning in the big data space to customer- and mobile-friendly user experiences. We are an agile team, moving quickly in collaboration with our business to bring new features to millions of Amazon customers while having fun and filing new inventions along the way. If you can think big and want to join a fast-moving team breaking new ground at Amazon, we would like to speak with you! - Bachelors in Computer Science or related area, or equivalent industry experience - Proficient in OO design/architecture, algorithms, data structures and big-O analysis - 3+ years developing high quality, production software in Java or C++ - 3+ years developing on Linux/related platforms - Proficient in web technologies - Masters in Computer Science or related area, or equivalent industry experience - Experience working with Spring and relational databases (Oracle and JDBC/Hibernate a plus) - Experience writing code in a high-volume, service-based architecture - Knowledge of statistical analysis in the big data space Amazon is an Equal Opportunity-Affirmative Action Employer – Minority / Female / Disability / Veteran / Gender Identity / Sexual Orientation. AMZR Req ID: 518719 External Company URL: www.amazon.com
              Futures: Deep learning and health - the hurdles machine learning must leap   
    Startups and Silicon Valley giants are pushing into medicine with artificial intelligence and deep learning.
              Google Photos' AI-powered sharing is now available   
    Google is making good on its promise of AI-assisted photo sharing. A Google Photos upgrade arriving this week uses machine learning...
              Introducing Appetite and updates from 2013   


    Just wanted to update you on my latest experiment. After my journeys at Hover[1] and Taptolearn[2], I have now joined the 3-person team at verbs.im and am prototyping some new ideas. I've always liked detecting patterns - be it patterns in names as a kid, context on a webpage, or in big data. So detecting apps from the image of a home screen in near real-time seemed challenging. Plus I get to work in C++ beyond just the weekends (and Erlang for Verbs), which is a great start to my new year.
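    At its core, detecting apps from a screenshot is icon template matching: slide each known icon over the image and find the position where the pixels agree best. Appetite's actual pipeline isn't public, so this is only an illustrative sketch in plain Python (brute-force sum of squared differences over tiny grayscale grids; all names are mine, not Appetite's):

```python
def best_match(image, template):
    """Slide `template` over `image` (2D grayscale lists) and return
    the (row, col) offset with the lowest sum of squared differences."""
    ih, iw = len(image), len(image[0])
    th, tw = len(template), len(template[0])
    best, best_score = None, float("inf")
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            score = sum(
                (image[r + dr][c + dc] - template[dr][dc]) ** 2
                for dr in range(th)
                for dc in range(tw)
            )
            if score < best_score:
                best_score, best = score, (r, c)
    return best, best_score

# Toy 4x4 "screenshot" containing a 2x2 "icon" at offset (1, 2).
screen = [
    [0, 0, 0, 0],
    [0, 0, 9, 7],
    [0, 0, 8, 6],
    [0, 0, 0, 0],
]
icon = [[9, 7], [8, 6]]
print(best_match(screen, icon))  # exact match: ((1, 2), 0)
```

    A real near-real-time version would instead use an optimized correlation routine (e.g. OpenCV-style matchTemplate in C++) and match against a library of known icons, but the idea is the same.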

    Appetite detects apps on your iPhone's home screen automagically[3], and makes it easy to share them by giving you a short URL. Here's an example of what we detected from Ashton Kutcher's iPhone: http://appetite.io/a/c282709a
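    A short URL like appetite.io/a/c282709a ends in a compact hex identifier. One common way to mint such ids (purely a guess at how Appetite might do it, not its documented behavior) is to hash the detected content and keep the first few hex digits:

```python
import hashlib

def short_id(app_names, length=8):
    """Derive a stable short hex id from a detected app list.
    Sorting first makes the id independent of detection order."""
    canonical = "\n".join(sorted(app_names)).encode("utf-8")
    return hashlib.sha1(canonical).hexdigest()[:length]

apps = ["Camera", "Instagram", "Twitter"]
print("appetite.io/a/" + short_id(apps))
```

    The upside of hashing over a random id is that re-scanning the same home screen yields the same URL; the downside is that truncating the digest trades collision resistance for brevity.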

    Too bad we couldn't submit it to betali.st - apparently they only list startups never publicly mentioned before. Guess we were 24 hours too late. I'm stoked by the initial response to Appetite on Twitter since posting it on HN yesterday.

    Here's what Matthew @panzer, editor of The Next Web, had to say when, to our delight, he stumbled upon Appetite:

    My CES home screen. Here’s a link to the apps (minus Glassboard, detected as a Japanese app): appetite.io/a/fe40b073 twitter.com/panzer/status/…

    — Matthew Panzarino (@panzer) January 5, 2013

    Although Google says the inability to appeal is just a bug, even if appeals were possible, the effort required of a YouTuber would still be enormous. This clearly affects the ability of "marginal" YouTubers to make a living, and it could have an impact on the entire AdSense community.

    Friends at Hong Kong media agencies say local advertisers do not seem to have been "alarmed" by any of this; no local brand has proactively (via its media agency) pulled its ads from YouTube. International brands are another story: their local offices have followed headquarters' instructions and suspended their YouTube advertising, which certainly hurts YouTube's ad revenue in Hong Kong. What Google considers most fortunate is that some big brands already have deals committing a certain spend to Google, so that ad money stays within Google's reach. Companies without such a deal tying up the budget, however, have moved the money out of Google's pocket and elsewhere. My guess is that the biggest beneficiary of this is Facebook.

    Advertising and platforms, platforms and content: these relationships have always been delicate. In principle the two should be considered separately, but they influence each other, so advertisers cannot help weighing them together. Yet if the two are tied too tightly, advertisers can easily end up dictating how much creative freedom content creators have.

    Further reading:
    European and American advertisers boycott Google
    Google Updates Ads Policies Again, Ramps Up AI to Curtail YouTube Crisis

    Related articles:
    How much ad money do YouTubers earn from YouTube?
    Challenging YouTube: Facebook displays video view counts and adds a re-targeting option
    YouTube strikes back! Google launches the Cards feature, bringing (something like) Annotations to mobile


              2017 Annual HLS Survey Results   

    As many of you know, Cadence (more correctly, “I”) recently performed an industry survey about HLS (High Level Synthesis) to get a fuller view of the productivity experiences and expectations from users and non-users alike.

    With nearly 200 responses, roughly half from HLS users and half not, we got a representative picture of what HLS users, potential users, and even skeptics believe about HLS. So let’s dive in.

    How familiar are you with high-level synthesis (HLS)?

     This was a good cross-section of high-level synthesis users and non-users, which I was very happy to see. The numbers are high enough that they are likely a decent representation of the industry perceptions.

    In the analysis of the following questions, I break down the responses by people who have used HLS (the mustard and light blue sections of the above graph) vs non-users (the next three categories). I excluded the responses from people who answered “not at all,” since they self-identified as not even hearing about HLS before this survey.

    The next question, only for HLS users, is about what they have designed with HLS.


    What types of hardware have you designed with HLS? (select all that apply)

    The first takeaway should be that the old opinion that HLS is only used for datapath types of applications is just that… old. Many years ago, that was true, but not today. “Controllers” and “processors” combined account for 24% of the design types. Of course, some of the other areas, such as “wired networking,” are likely to include a lot of non-datapath processing as well.

    Compared to the survey I did in 2015, “Image Processing,” the combined “networking” categories, and “encryption” have all decreased as an overall percentage. To be clear, this reflects the diversely growing user base of HLS, not an absolute decrease in these categories. (As a matter of fact, wireless was the fastest growing market segment for Stratus™ HLS in 2016.)

    The remainder of this year’s survey focuses on productivity, starting with overall productivity compared to an RTL designer. In the following graphs, red bars are the reported users’ experiences, and blue bars are the reported non-users’ expectations.


    On average, how productive is an HLS designer compared to an RTL designer?

    Most HLS users (red bars) are seeing a good productivity benefit. The spread in productivity didn’t appear to have any correlation with the types of hardware being designed. It’s quite possible it’s related to the learning curve, as productivity tends to increase as familiarity with the HLS flow increases. Next year, I’ll be sure to ask, “How long have you been using HLS?”

    It was interesting to see that 5% of HLS users are exceeding the standard HLS claim of “up to 10x better productivity.” Perhaps we should increase the claim…?

    One disappointing result is the shape of the graph for non-users compared to users. As a group, non-users have lower productivity expectations than what is being realized by industry users. In fact, almost a third believe there is no productivity benefit. I guess that gives me and the rest of the HLS community some homework…

    The next question asked about the productivity gained through behavioral IP reuse.

    How much productivity is gained through behavioral IP reuse? “Behavioral IP” is defined as high-level IP created for implementation with high-level synthesis (HLS). Behavioral IP can typically be reused or retargeted by changing some controls on the HLS tool.

    Again, most HLS users (red bars) are seeing a good productivity benefit from behavioral IP, and 5% are exceeding even the marketing claims.

    Unlike the previous question, the shapes of the graphs of the user experiences and non-user expectations were mostly in line, albeit with a few more at the extreme high and low ends of expectations.

    The final productivity question is about verification.

    How much more productive is verification in the HLS flow?


    Once again, most HLS users (red bars) are seeing a good productivity benefit. As before, nothing seemed to correlate with the spread in productivity. Over 6% of users are exceeding the HLS productivity claims. Interestingly, more non-users seem to believe the HLS productivity benefit when it comes to verification.

    At this point, you may be going cross-eyed from all the graphs, so let me summarize.

    • The range of applications where HLS is being used has broadened significantly.
    • HLS users are getting, and sometimes exceeding, the productivity benefits that we EDA vendors claim.
    • Non-HLS users accept the productivity benefit when it comes to verification more readily than design.

    I’m sure there are other correlations and data that can be gleaned from the results. Maybe I can get my hands on some of that machine learning IP to sift through the raw data….


    I’ll close with one final survey result. This one may be immediately applicable to you, and might even save you some money. The Pursley household recently saw three early summer “blockbuster” movies. Previews suggested each could be the movie of the year, so I did a not-so-anonymous survey to see which movie was the best.

    As you can see, Guardians of the Galaxy Vol. 2 was the clear winner, with 75% of respondents saying it was the best. It also got a very rare “maybe 10 out of 10 stars” from our resident movie critic. Wonder Woman was also a fantastic movie, getting 25% of the votes and an “8 out of 10 stars” only because it started a little slow.

    Pirates of the Caribbean: Dead Men Tell No Tales was a different experience altogether. I think it got negative stars, but I can’t remember because we were almost running to get out of the theater. Yeah, it was that bad… but your mileage may vary.

    For more information about the sequel Dead Men Don't Do HLS...sorry, my brain is still mush...I mean Stratus, see the product page.


              Software Engineer - Computer Vision/Machine Learning Expert - Uber - Boulder, CO   
    About the Team: Uber, Advanced Technologies, Engineering - Imagery is the Louisville, CO division of the Uber Engineering Team:....
    From Uber - Sat, 22 Apr 2017 14:05:27 GMT - View all Boulder, CO jobs
              Hedge Funds Look to Machine Learning, Crowdsourcing for Competitive Advantage   
    Hedge funds are testing new quantitative strategies that could supplant traditional fund managers
              Software Development Manager - AFT Entropy Management Tech - AMZN CAN Fulfillment Svcs, Inc - Toronto, ON   
    We operate at a nexus of machine learning, computer vision, robotics, and healthy measure of hard-earned expertise in operations to build automated, algorithmic...
    From Amazon.com - Tue, 27 Jun 2017 14:12:51 GMT - View all Toronto, ON jobs
              Senior AI Solution Architect - Innodata Labs - Remote   
    Master’s or PhD degree, or proven workplace experience in machine learning/artificial intelligence. Are you a born entrepreneur who finds satisfaction in...
    From Innodata Labs - Fri, 05 May 2017 23:23:37 GMT - View all Remote jobs
              Network Engineer - Daimler - Sunnyvale, CA   
    MBRDNA is headquartered in Silicon Valley, California, with key areas of Advanced Interaction Design, Digital User Experience, Machine Learning, Autonomous...
    From Daimler - Thu, 13 Apr 2017 05:42:50 GMT - View all Sunnyvale, CA jobs
              Senior Software Engineer - Amazon Corporate LLC - New York, NY   
    Machine learning experience. What's the business opportunity? We also own internal services for launching, managing, and monitoring of those placements....
    From Amazon.com - Sat, 11 Mar 2017 00:47:45 GMT - View all New York, NY jobs
              Software Dev Engineer -- Ad Platform - Amazon Corporate LLC - New York, NY   
    Machine learning experience. What's the business opportunity? We also own internal services for launching, managing, and monitoring of those placements....
    From Amazon.com - Wed, 08 Mar 2017 06:39:18 GMT - View all New York, NY jobs
              Business Continuity / Disaster Recovery Architect - Neiman Marcus - Dallas, TX   
    Advanced degree in Applied Mathematics, Business Analytics, Statistics, Machine Learning, Computer Science or related fields is a plus....
    From Neiman Marcus - Thu, 25 May 2017 22:30:52 GMT - View all Dallas, TX jobs
              DMG Launches A.I. Brand Safety Tool BrandX   
                        display:block !important;
                    } /* Place footer social and utility links on their own lines, for easier access */

                    .footerContent.social a {
                        display: inline-block !important;
                    }
                }
          </style>
    </HEAD>
    <body leftmargin="0" marginheight="0" marginwidth="0" offset="0" style="-webkit-text-size-adjust: 100%;-ms-text-size-adjust: 100%;margin: 0;padding: 0;background-color: #DEE0E2;height: 100% !important;width: 100% !important;" topmargin="0">
    <center>
    <table align="center" border="0" cellpadding="0" cellspacing="0" height="100%" id="bodyTable" style="-webkit-text-size-adjust: 100%;-ms-text-size-adjust: 100%;mso-table-lspace: 0pt;mso-table-rspace: 0pt;margin: 0;padding: 0;background-color: #DEE0E2;border-collapse: collapse !important;height: 100% !important;width: 100% !important;" width="100%">
    <tr>
    <td align="center" id="bodyCell" style="-webkit-text-size-adjust: 100%;-ms-text-size-adjust: 100%;mso-table-lspace: 0pt;mso-table-rspace: 0pt;margin: 0;padding: 20px;border-top: 4px solid #BBBBBB;height: 100% !important;width: 100% !important;" valign="top">
    <table border="0" cellpadding="0" cellspacing="0" id="templateContainer" style="-webkit-text-size-adjust: 100%;-ms-text-size-adjust: 100%;mso-table-lspace: 0pt;mso-table-rspace: 0pt;width: 600px;background-color: #ffffff;border: 1px solid #BBBBBB;border-collapse: collapse !important;">
    <tr>
    <td align="center" style="-webkit-text-size-adjust: 100%;-ms-text-size-adjust: 100%;mso-table-lspace: 0pt;mso-table-rspace: 0pt;" valign="top">
    <table border="0" cellpadding="0" cellspacing="0" id="templateHeader" style="-webkit-text-size-adjust: 100%;-ms-text-size-adjust: 100%;mso-table-lspace: 0pt;mso-table-rspace: 0pt;background-color: #F4F4F4;border-top: 1px solid #FFFFFF;border-bottom: 1px solid #CCCCCC;border-collapse: collapse !important;" width="100%">
    <tr>
    <td class="headerContent" style="-webkit-text-size-adjust: 100%;-ms-text-size-adjust: 100%;mso-table-lspace: 0pt;mso-table-rspace: 0pt;color: #505050;font-family: Helvetica;font-size: 20px;font-weight: bold;line-height: 100%;padding-top: 0;padding-right: 0;padding-bottom: 0;padding-left: 0;text-align: left;vertical-align: middle;" valign="top"><img id="headerImage" src="http://content.prnewswire.com/designimages/prnj_email_header-1.png" style="max-width: 600px;-ms-interpolation-mode: bicubic;border: 0;height: auto;line-height: 100%;outline: none;text-decoration: none;"></td>
    </tr>
    </table>
    </td>
    </tr>
    <tr>
    <td align="center" style="-webkit-text-size-adjust: 100%;-ms-text-size-adjust: 100%;mso-table-lspace: 0pt;mso-table-rspace: 0pt;" valign="top">
    <table border="0" cellpadding="0" cellspacing="0" id="templateBody" style="-webkit-text-size-adjust: 100%;-ms-text-size-adjust: 100%;mso-table-lspace: 0pt;mso-table-rspace: 0pt;background-color: #FFFFFF;border-top: 1px solid #FFFFFF;border-collapse: collapse !important;" width="100%">
    <tr>
    <td class="bodyContent" style="-webkit-text-size-adjust: 100%;-ms-text-size-adjust: 100%;mso-table-lspace: 0pt;mso-table-rspace: 0pt;color: #505050;font-family: Helvetica;font-size: 16px;line-height: 150%;padding-top: 20px;padding-right: 20px;padding-bottom: 20px;padding-left: 20px;text-align: left;" valign="top">
    <h1 style="display: block;font-family: Helvetica;font-size: 26px;font-style: normal;font-weight: bold;line-height: 100%;letter-spacing: normal;margin-top: 0;margin-right: 0;margin-bottom: 10px;margin-left: 0;text-align: left;color: #404040 !important;">Tech Profile</h1>
    <h4 style="display: block;font-family: Helvetica;font-size: 14px;font-weight: normal;line-height: 100%;letter-spacing: normal;margin-top: 0;margin-right: 0;margin-bottom: 10px;margin-left: 0;text-align: left;color: #808080 !important;">Username: <a href="https://prnmedia.prnewswire.com/profile/?action=editProfile" style="-webkit-text-size-adjust: 100%;-ms-text-size-adjust: 100%;color: #3d98c6;font-weight: normal;text-decoration: underline;" target="_blank">aronschatz / edit profile</a>
    </h4>
    </td>
    </tr>
    </table>
    </td>
    </tr>
    <tr>
    <td align="center" class="bodyContent" style="-webkit-text-size-adjust: 100%;-ms-text-size-adjust: 100%;mso-table-lspace: 0pt;mso-table-rspace: 0pt;" valign="top">
    <table border="0" cellpadding="0" cellspacing="0" id="templateBody" style="-webkit-text-size-adjust: 100%;-ms-text-size-adjust: 100%;mso-table-lspace: 0pt;mso-table-rspace: 0pt;border-collapse: collapse !important;" width="100%">
    <tr>

    <style type="text/css">
    /* Style Definitions */
    span.prnews_span
    {
    font-size:8pt;
    font-family:"Arial";
    color:black;
    }
    a.prnews_a
    {
    color:blue;
    }
    li.prnews_li
    {
    font-size:8pt;
    font-family:"Arial";
    color:black;
    }
    p.prnews_p
    {
    font-size:0.62em;
    font-family:"Arial";
    color:black;
    margin:0in;
    }
    </style>

    <div xmlns="http://www.w3.org/1999/xhtml" xmlns:xn="http://www.xmlnews.org/ns/" class="xn-newslines">

    <h1 class="xn-hedline">DMG Launches A.I. Brand Safety Tool BrandX</h1>

    <h2 class="xn-hedline">New SSP Integration Can Prevent Invalid Traffic and Predict Completion Rates</h2>

    <p class="xn-distributor">PR Newswire</p>

    <p class="xn-dateline">RAANANA, Israel, June 28, 2017</p>
    </div>

    <div class="xn-content" xmlns="http://www.w3.org/1999/xhtml" xmlns:xn="http://www.xmlnews.org/ns/">

    <p>RAANANA, <span class="xn-location">Israel</span>, <span class="xn-chron">June 28, 2017</span> /PRNewswire/ --&nbsp;<a href="http://www.dsnrmg.com/" rel="nofollow" target="_blank">DMG</a> DSNR Media Group (<a href="http://www.dsnrmg.com/" rel="nofollow" target="_blank">http://www.dsnrmg.com/</a>), a leading digital advertising company, this week launched BrandX, a new artificial intelligence tool that can prevent fraudulent traffic and predict the completion rates of video ads during programmatic advertising auctions.</p>

    <p>The two most uncertain factors in programmatic video advertising are invalid traffic and brand-awareness efficiency. As real-time auctions became commonplace, advertisers relied on trial and error to adjust their bidding algorithms based on past rates. With machine learning, BrandX helps mitigate advertisers' uncertainty in milliseconds and adds transparency to real-time bidding. By blocking invalid traffic and predicting completion rates, BrandX gives advertisers the opportunity to raise or lower a bid accordingly. <br>
    <br>BrandX tests the traffic originating from publishers on DMG's SSP and assigns it a traffic risk score. The tool filters inventory through DMG's unique quality assurance engine, which results in 3% higher filtering. Using tools like Forensiq alongside DMG's proprietary technology, the system blocks many instances of fraudulent traffic, so DMG's partners get high-quality, premium levels of clean traffic. The new features will be part of the bid request as extensions to the OpenRTB protocol.</p>

    <p>
    <span class="xn-person">Tom Barkan</span>, head of product management at DMG, says, "BrandX is helping us innovate the way we help our partners get the best results with our products. Machine learning algorithms in BrandX and the huge amount of data passing through our SSP help us to achieve this goal." </p>

    <p>BrandX was first rolled out to select DMG development partners, such as <span class="xn-location">San Francisco, California</span>-based RLLCLL, a rising star in programmatic audience-based video exchange.</p>

    <p>
    <span class="xn-person">Rolan Reichel</span>, CEO of RLLCLL, says, "Partnering with DMG's SSP is part of RLLCLL's goal of bringing its advertisers the best inventory quality and keeping their brands safe."</p>

    <p>About DMG:<br>DMG is a leading digital advertising agency, providing both direct and programmatic advertisers and publishers with data-driven solutions and patented technologies. </p>

    <p>
    <b>DMG Resources</b>
    </p>

    <p>Website: <a href="http://www.dsnrmg.com/" rel="nofollow" target="_blank">http://www.dsnrmg.com</a>
    </p>

    <p>Blog: <a href="http://www.dsnrmg.com/blog/" rel="nofollow" target="_blank">http://www.dsnrmg.com/blog/</a>
    </p>

    <p>Twitter: <a href="https://twitter.com/DMG_interact" rel="nofollow" target="_blank">https://twitter.com/DMG_interact</a>
    </p>

    <p>
    <b>Contact Information:</b>
    </p>

    <p>
    <span class="xn-person">Gil Wilder Tekuzener</span>
    <br>Marketing Executive, DMG<br>+972-73-200-2495 <br>
    <a href="mailto:gilwil@dsnrmg.com" rel="nofollow" target="_blank">gilwil@dsnrmg.com</a>
    </p>

    <p>&nbsp;</p>

    <p>SOURCE  DMG DSNR Media Group</p>

    </div>

    <img alt="" src="https://rt.prnewswire.com/rt.gif?NewsItemId=LN28543&Transmission_Id=201706280900PR_NEWS_USPR_____LN28543&DateId=20170628" style="border:0px; width:1px; height:1px;" xmlns="http://www.w3.org/1999/xhtml" xmlns:xn="http://www.xmlnews.org/ns/">

    <hr>
    <img alt="" src="https://rt.prnewswire.com/et.gif?newsItemId=LN28543&Transmission_Id=201706280900PR_NEWS_USPR_____LN28543&DateId=20170628&user=1224830" style="border:0px; width:1px; height:1px;"></tr>
    </table>
    </td>
    </tr>
    <tr>
    <td align="center" style="-webkit-text-size-adjust: 100%;-ms-text-size-adjust: 100%;mso-table-lspace: 0pt;mso-table-rspace: 0pt;" valign="top">
    <table border="0" cellpadding="0" cellspacing="0" id="templateFooter" style="-webkit-text-size-adjust: 100%;-ms-text-size-adjust: 100%;mso-table-lspace: 0pt;mso-table-rspace: 0pt;background-color: #F4F4F4;border-top: 1px solid #FFFFFF;border-collapse: collapse !important;" width="100%">
    <tr>
    <td class="footerContent social" style="-webkit-text-size-adjust: 100%;-ms-text-size-adjust: 100%;mso-table-lspace: 0pt;mso-table-rspace: 0pt;color: #808080;font-family: Helvetica;font-size: 10px;line-height: 150%;padding-top: 20px;padding-right: 20px;padding-bottom: 20px;padding-left: 20px;text-align: center;" valign="top">
    <h4 style="display: block;font-family: Helvetica;font-size: 14px;font-weight: normal;line-height: 100%;letter-spacing: normal;margin-top: 0;margin-right: 0;margin-bottom: 10px;margin-left: 0;text-align: center;color: #808080 !important;">Follow Us</h4>
    <a href="https://www.facebook.com/pages/PR-Newswire-for-Journalists/99662330903" style="-webkit-text-size-adjust: 100%;-ms-text-size-adjust: 100%;margin: 0 10px;color: #606060;font-weight: normal;text-decoration: underline;" target="_blank"><img alt="Facebook" height="32" src="http://content.prnewswire.com/images/FB-f-Logo__blue_32.png" style="-ms-interpolation-mode: bicubic;border: 0;height: auto;line-height: 100%;outline: none;text-decoration: none;" width="32"></a><a href="https://twitter.com/beyondbylines" style="-webkit-text-size-adjust: 100%;-ms-text-size-adjust: 100%;margin: 0 10px;color: #606060;font-weight: normal;text-decoration: underline;" target="_blank"><img alt="Twitter" height="32" src="http://content.prnewswire.com/images/Twitter_logo_blue.png" style="-ms-interpolation-mode: bicubic;border: 0;height: auto;line-height: 100%;outline: none;text-decoration: none;" width="32"></a><a href="https://www.linkedin.com/company/pr-newswire" style="-webkit-text-size-adjust: 100%;-ms-text-size-adjust: 100%;margin: 0 10px;color: #606060;font-weight: normal;text-decoration: underline;" target="_blank"><img alt="LinkedIn" height="32" src="http://content.prnewswire.com/images/In-2C-32px.png" style="-ms-interpolation-mode: bicubic;border: 0;height: auto;line-height: 100%;outline: none;text-decoration: none;" width="32"></a><a href="https://plus.google.com/+prnewswire/posts" style="-webkit-text-size-adjust: 100%;-ms-text-size-adjust: 100%;margin: 0 10px;color: #606060;font-weight: normal;text-decoration: underline;" target="_blank"><img alt="Google+" height="32" src="http://content.prnewswire.com/images/Red-signin_Short_base_32dp.png" style="-ms-interpolation-mode: bicubic;border: 0;height: auto;line-height: 100%;outline: none;text-decoration: none;" width="32"></a></td>
    </tr>
    </table>
    </td>
    </tr>
    <tr>
    <td align="center" style="-webkit-text-size-adjust: 100%;-ms-text-size-adjust: 100%;mso-table-lspace: 0pt;mso-table-rspace: 0pt;" valign="top">
    <table border="0" cellpadding="0" cellspacing="0" id="templateFooter" style="-webkit-text-size-adjust: 100%;-ms-text-size-adjust: 100%;mso-table-lspace: 0pt;mso-table-rspace: 0pt;background-color: #F4F4F4;border-top: 1px solid #FFFFFF;border-collapse: collapse !important;" width="100%">
    <tr>
    <td class="footerContent" style="-webkit-text-size-adjust: 100%;-ms-text-size-adjust: 100%;mso-table-lspace: 0pt;mso-table-rspace: 0pt;color: #808080;font-family: Helvetica;font-size: 10px;line-height: 150%;padding-top: 20px;padding-right: 20px;padding-bottom: 20px;padding-left: 20px;text-align: left;" valign="top">
                                                    To change the settings for your profile(s) or email delivery, go to <a href="https://prnmedia.prnewswire.com/profile/?action=editProfile" style="-webkit-text-size-adjust: 100%;-ms-text-size-adjust: 100%;color: #606060;font-weight: normal;text-decoration: underline;" target="_blank">https://prnmedia.prnewswire.com/profile/?action=editProfile</a> and select the profile you would like to edit. You can select the industries, subjects, languages, geographical areas, companies, delivery options and delivery frequencies of your choice.
                                                    <br>
    <br>
                                                    In addition to current press releases, you can also find archived news, corporate information, photos, tradeshow news and much more on the PR Newswire for Journalists website: <a href="http://prnmedia.prnewswire.com" style="-webkit-text-size-adjust: 100%;-ms-text-size-adjust: 100%;color: #606060;font-weight: normal;text-decoration: underline;" target="_blank">https://prnmedia.prnewswire.com</a>
    <br>
    <br>
                                                    To contact us, email: <a href="mailto:mediasite@prnewswire.com" style="-webkit-text-size-adjust: 100%;-ms-text-size-adjust: 100%;color: #606060;font-weight: normal;text-decoration: underline;">mediasite@prnewswire.com</a>
    <br>
    <br>
                                                    Please do not reply to this email.
              13721: Award-Winning Accountants.   

    Adweek reported new Publicis Groupe CEO Arthur Sadoun is banning his White advertising agencies from participating in award shows and trade shows for 2018 as part of a cost-cutting move. Guess Sadoun has to make up for all of his predecessor’s compulsive digital spending and infamous failed merger deal. One thing is certain: The bold action will clearly show that holding companies like Publicis Groupe spend far more on pursuing trophies than promoting diversity.

    Publicis Groupe Forbids All of Its Agencies From Participating in Awards Shows in 2018

    New CEO Arthur Sadoun makes first mark with decision to save costs

    By Patrick Coffee

    Publicis Groupe will be sitting out the 2018 Cannes Lions festival. The reason? To save money.

    New chief executive officer Arthur Sadoun made his first dramatic mark on the holding company this week by forbidding all of its agencies around the world from participating in awards shows, trade shows or other paid promotional efforts for more than a year.

    According to an internal memo written by CEO Frank Voris of Publicis Groupe’s financial services unit, Re:Sources, Sadoun’s company is “looking for 2.5 percent cost synergies for 2018” and hopes to achieve those savings, at least in part, by “eliminating all award/trade shows for the next year.”

    The memo notes that Re:Sources “will not participate in any vendor conferences, industry trade shows and/or award shows effective July 1.”

    “This is mandatory and exceptions will not be approved. … Award/trade show ban is effective for the entire Groupe, not just Re:Sources,” the memo states.

    The news comes on the same day Sadoun announced the launch of Marcel, a platform designed to serve more than 80,000 employees in 30 different countries and described as “the first-ever professional assistant that uses AI and machine learning technology.”

    The announcement is in keeping with an earlier video in which Sadoun said he wants Publicis Groupe to function as “a platform” rather than a network as part of its larger “Power of One” strategy. In some ways, the Marcel presented in that video resembles Source, a “gamified” global operating system and collaboration tool developed by Omnicom media agency PHD in 2012.

    When speaking to Adweek about Marcel, Sadoun did not directly address whether Publicis Groupe will be sitting out next year’s Cannes festival. He did, however, note that Marcel will debut during the 2018 VivaTech conference in Paris, which directly precedes Cannes. He also stated that Publicis Groupe would not be using any of its budget for self-promotional purposes during the development of Marcel.

    According to the Voris memo, Sadoun made these announcements during his first “management session,” which occurred in Paris over the weekend.

    A Publicis Groupe spokesperson declined to elaborate on the news beyond Sadoun’s statements and denied the plans have anything to do with “cost synergies.” Re:Sources representatives have not yet responded to requests for comment.

    A Cannes Lions press contact has also not responded to a query regarding Publicis Groupe’s apparent decision to sit out the 2018 festival.

    Earlier this year, the Re:Sources organization went through a round of layoffs attributed to “automating some of its financial operations in order to deliver globally standardized financial [and] accounting services.”

              GSA calls for blockchain and machine learning to speed acquisition   

    The General Services Administration is looking to speed up acquisition by harnessing innovative machine learning and blockchain technologies. The administration released a request for quotes June 19 to improve its Multiple Award Schedules FASt Lane program. FASt Lane was implemented in 2016 to give government agencies timely access to new technology innovation by shortening processing times. […]

    The post GSA calls for blockchain and machine learning to speed acquisition appeared first on Fedscoop.


              Microsoft Dynamics 365/CRM July 2017 Update   

    There is an update coming to the Dynamics 365/CRM platform in July: the Microsoft Dynamics 365/CRM July 2017 Update. So what's new?

    Unified User Interface – Microsoft is releasing improvements designed to unify the user interface. These include a new activity timeline for mobile with inline actions, new custom controls, and expanded offline mobile capabilities, including for the micro apps that administrators can build for specific roles. Additional updates include new theming options, improved mobile layout support, and a refreshed look and feel that brings the interface up to date.

    User Interface Update

    Virtual Entities – Virtual Entities will allow data residing in outside systems to be integrated into the interface via web services. In other “platform” news, Microsoft Flow will be exposed within Dynamics 365 so flows can be created and executed without switching applications.

    Customer Insights experience – will bring together enterprise data, analytics, BI, machine learning and more, and will offer capabilities like high-volume segmentation, predictive lead scoring, and machine learning extensions.

    Microsoft appears to remain committed to rolling out new capabilities in areas like bots, integrated services, and field service-related IoT; details to follow.

    For more info and screenshots of the new features being released head over to our blog

    Are you missing the boat on all this innovation because you are on an on-premise model and an older version? We can help plan and execute your upgrade to a current version of Dynamics 365 on premise or migrate you to Dynamics 365 online. For more info, give us a call at 844.8.STRAVA (844.878.7282) or email us at info@stravatechgroup.com.

    The post Microsoft Dynamics 365/CRM July 2017 Update appeared first on CRM Software Blog | Dynamics 365.


              Emails have feelings too!   

    Emails have feelings, and so do cases, web form submissions, survey responses, and any other text-based field in your Dynamics 365 (CRM) system!

     

    You receive communications from your customers on a daily basis from a variety of sources, such as email, web submission forms, support cases, etc. Imagine if you could have instant visibility to how your customers are FEELING based on the text they send you without reading it. Angry customers can be addressed immediately via a phone call from a real person. Happy customers can be sent an automatic email and receive a personal follow up within a day or two.

     

    What do you need to do this? D365, a subscription to Azure, and these instructions. Missing any of these? Don't worry, we can help!

     

    There are 3 simple elements to setting up a cognitive service in Azure to gauge sentiment on your D365 records:

    • Add a custom field to the entity in D365
    • Set up a Cognitive Service in Azure (one-time setup)
    • Create a Logic App in Azure to make the magic happen

     

    Using the back-end machine learning that powers Cortana Intelligence, you can set up a simple logic app in Azure to scan text from your CRM records and return a score of 0.00 - 1.00 based on your customer's sentiment. That's right, you can find out how your customers FEEL based on the text they send you.
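    To make the data flow concrete, here is a minimal Python sketch of what the Detect Sentiment step exchanges under the hood: the documents payload and the 0.00 - 1.00 score follow the shape of the Text Analytics v2 sentiment REST API, but the helper names and the canned response below are illustrative only; in the walkthrough, the Logic App connector builds and parses this for you.

    ```python
    # Illustrative helpers (not part of the Logic App itself): build the
    # "documents" payload the sentiment operation expects, and pull the
    # score for one record back out of a response body.

    def build_sentiment_request(record_id, text, language="en"):
        """Shape one D365 record's text into the API's documents payload."""
        return {"documents": [{"id": record_id, "language": language, "text": text}]}

    def extract_score(response_body, record_id):
        """Return the 0.00-1.00 sentiment score for the given document id, or None."""
        for doc in response_body.get("documents", []):
            if doc["id"] == record_id:
                return doc["score"]
        return None
    ```

    A score near 1.00 indicates positive sentiment and a score near 0.00 indicates negative sentiment, which is exactly the value the Logic App writes back into your custom Sentiment Score field.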

     

    "What do I do?"

    First, decide which entity you would like to detect sentiment for: Email Messages? Cases? Leads? A custom entity?

    In D365, create and add a new field to a form (for example, your Email form) called Sentiment Score.

     

    You can come back later to add the field to views and create a rollup field on a related entity for higher-level analysis.

     

    Next, you need to create the service in Azure. This is a one-time setup. You will be able to reuse this service for multiple apps.

    In Azure, create a new Text Analytics API service called 'CognitiveService-TextAnalytics'. Be sure to select the appropriate Pricing tier* based on your usage. You can change this in the future, if necessary.

     

    In order to set up the Logic App in the next steps, you will need the exact Name and the Access Key for this service. From your start page in Azure, navigate to the service you just created, open it, and click 'Show access keys…' to copy the key.

     

    Side note: this is where you will be able to see all of the analytics for the service once records start flowing through your logic app.

     

    Finally, you will create the Logic App to connect D365 and Azure.

    In Azure, create a new Logic App under Web + Mobile and give your app a name that includes the entity name as well, such as SentimentAnalysisEmail. Once the app deployment is complete, access your app to begin the design.

     

    In the Designer window, select the 'Blank Logic App' from the Templates area. You will be adding 3 steps to your app - 1 trigger step and 2 action steps.

     

    Step 1: Trigger: Dynamics 365 - When a record is created

    Connect to your D365 org using login credentials for a system admin account with a password that does not expire and select the appropriate entity from the list. Set the desired Frequency and Interval (for a constant check, select Seconds and enter 1).

     

    Step 2: Action: Text Analytics - Detect Sentiment

    Key in the exact Name of the service you created and paste the key that you copied. Click in the Text field to select the text field from the entity that you would like to analyze, such as Description.

     

    Step 3: Action: Dynamics 365 - Update a record

    Connect to your D365 org and select the appropriate entity. Click in the Record Identifier field to select the field that represents the unique id of the entity record, such as Email Message for the email entity.

     

    Click on the link to 'Show advanced options' - this shows all of the fields on the selected entity that you can update. Find the field that you created in D365 in Step 1 and select the Score field from the right.

     

    Click Save in the command bar to save the Logic App.

     

    That's it! Now you can test it by creating a new record in D365.

     

    Don't forget that you can view the analytics in Azure by going to the Service and/or to the Logic App. This will help you keep a watch on how many calls (record updates) occur during a given time and will allow you to view errors, if any occur, on the updates.

     

    "There were so many steps! What does my Logic App actually do?"

    Your logic app is triggered when a new email (or your selected entity) record is created in your D365 organization. Once the new record is created, the Detect Sentiment service uses its machine intelligence to analyze the Description field, and the assigned score from 0.00 to 1.00 is sent back to your D365 system and written to the custom field that you created.
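    In plain code terms, the three Logic App steps reduce to a simple poll-score-write pipeline. The sketch below is a stand-in, not the Logic App itself: fetch_new_records, detect_sentiment, and update_record are hypothetical placeholders for the D365 trigger, the Text Analytics action, and the D365 update action.

    ```python
    # Illustrative pipeline mirroring the three Logic App steps.
    # The three callables stand in for the D365 and Text Analytics connectors.

    def run_sentiment_pipeline(fetch_new_records, detect_sentiment, update_record):
        """Step 1: poll for new records; Step 2: score the text; Step 3: write back."""
        for record in fetch_new_records():
            score = detect_sentiment(record["description"])  # 0.00 .. 1.00
            update_record(record["id"], {"sentiment_score": score})
    ```

    Swapping in real connector calls for the three placeholders would reproduce the flow the designer builds graphically.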

     

    Good news! You've created the Azure Service, which you only have to do once. Going forward you can use the same service on as many apps as you would like. You can also add Conditions to your Logic App that can branch off for multiple scenarios. You can even write additional code to parse out your text to return an even better sentiment score.

     

    "Now what?"

    This is only the beginning! Now that you have this score, the next steps for your organization are endless. Use this field to trigger workflows to send a variety of email responses or update other fields. Add score-based icons to your views. Perform analysis and rollups on a customer, contact, product, campaign (and so on) basis. The bottom line is that you will be providing the BEST customer service that you can offer by being able to immediately gauge your customer's sentiment.

     

    Beringer Technology Group is a leading Microsoft Gold Certified Partner specializing in Microsoft Dynamics 365 and CRM for Distribution. We also provide expert Managed IT Services, Backup and Disaster Recovery, Cloud Based Computing and Unified Communication Systems.

     

    *Pricing tiers are as follows. Be advised that a "call" is a single record updated in D365.

    F0 (5k Calls per 30 days) (may or may not be available)

    S1 (100K Calls per 30 days)

    S2 (500K Calls per 30 days)

    S3 (2.5M Calls per 30 days)

    S4 (10M Calls per 30 days)

    The post Emails have feelings too! appeared first on CRM Software Blog | Dynamics 365.


              On the Cruelty of Really Writing a History of Machine Learning   
    The construction, maintenance, and mobilization of the data used both to constrain and to enable machine learning systems pose profound historiographical questions and offer an intellectual opportunity to engage fundamental questions about novelty in historical narratives. To effectively explore the intellectual, material, and disciplinary contingencies surrounding both the curation and subsequent distribution of datasets, we need to take seriously the field of machine learning as a worthy subject for historical investigation.
              Learning Theory Analysis for Association Rules and Sequential Event Prediction   
    We present a theoretical analysis for prediction algorithms based on association rules. As part of this analysis, we introduce a problem for which rules are particularly natural, called “sequential event prediction.” In sequential event prediction, events in a sequence are revealed one by one, and the goal is to determine which event will next be revealed. The training set is a collection of past sequences of events. An example application is to predict which item will next be placed into a customer's online shopping cart, given his/her past purchases. In the context of this problem, algorithms based on association rules have distinct advantages over classical statistical and machine learning methods: they look at correlations based on subsets of co-occurring past events (items a and b imply item c), they can be applied to the sequential event prediction problem in a natural way, they can potentially handle the “cold start” problem where the training set is small, and they yield interpretable predictions. In this work, we present two algorithms that incorporate association rules. These algorithms can be used both for sequential event prediction and for supervised classification, and they are simple enough that they can possibly be understood by users, customers, patients, managers, etc. We provide generalization guarantees on these algorithms based on algorithmic stability analysis from statistical learning theory. We include a discussion of the strict minimum support threshold often used in association rule mining, and introduce an “adjusted confidence” measure that provides a weaker minimum support condition that has advantages over the strict minimum support. The paper brings together ideas from statistical learning theory, association rule mining and Bayesian analysis.
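    As a toy illustration of the rule statistics the abstract refers to, the sketch below counts co-occurrences in past shopping-cart sequences and compares plain rule confidence, #(a and b)/#(a), with a K-smoothed adjustment, #(a and b)/(#(a)+K), that softly penalizes rules with little support. The K-smoothed form is one common variant and may differ in detail from the paper's adjusted confidence; the cart data is invented for illustration.

    ```python
    # Toy association-rule scoring over past event sequences.
    # K is the adjustment parameter: larger K pulls low-support rules toward 0.

    def rule_scores(sequences, a, b, K=1.0):
        n_a = sum(1 for s in sequences if a in s)              # sequences containing a
        n_ab = sum(1 for s in sequences if a in s and b in s)  # containing both a and b
        confidence = n_ab / n_a if n_a else 0.0
        adjusted = n_ab / (n_a + K)
        return confidence, adjusted

    # Hypothetical past carts: "bread implies butter" holds in 2 of 3 bread carts.
    carts = [{"bread", "butter"}, {"bread", "butter", "jam"}, {"bread"}, {"milk"}]
    conf, adj = rule_scores(carts, "bread", "butter", K=1.0)
    ```

    With this data, confidence is 2/3 while the adjusted score is 2/4 = 0.5: the adjustment discounts the rule because it rests on only three supporting sequences, which is the kind of weakened minimum-support behavior the abstract describes.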
              Bioinformatics Specialist-Metagenomics/Proteomics - Signature Science, LLC - Austin, TX   
    Travel to project and business development meetings as needed. Familiarity with machine learning, Git, and agile software development is a plus;... $90,000 a year
    From Signature Science, LLC - Tue, 06 Jun 2017 09:05:50 GMT - View all Austin, TX jobs
              Professional, Geospatial Operations - CoreLogic - Boulder, CO   
    Experience with machine learning tools, raster analysis, and feature extraction techniques is preferred. Working together, and differentiated by our superior...
    From CoreLogic - Wed, 12 Apr 2017 00:51:10 GMT - View all Boulder, CO jobs
              Adjunct / Contract Faculty: Data Analytics - Harrisburg University of Science & Technology - Harrisburg, PA   
    Candidates in machine learning should be familiar with the full spectrum of machine learning topics and specialization in one or more topic areas....
    From Indeed - Wed, 21 Jun 2017 18:50:19 GMT - View all Harrisburg, PA jobs
              SOFTWARE ENGINEER II - Microsoft - Redmond, WA   
    Product engineering experience with OS components Business Analyst or Machine Learning experience. Foundational promise to be the most secure collection of...
    From Microsoft - Sat, 25 Mar 2017 02:47:33 GMT - View all Redmond, WA jobs
              Data Scientist - Wink - New York, NY   
    Hands-on experience with supervised and unsupervised machine learning algorithms for regression, classification, and clustering....
    From Wink - Thu, 18 May 2017 06:17:27 GMT - View all New York, NY jobs
              How Artificial Intelligence Will Change Medical Imaging   

    An example of artificial intelligence from the start-up company Viz. The image shows how the AI software automatically reviews an echocardiogram, completes an automated left ventricular ejection fraction quantification and then presents the data side by side with the original cardiology report. The goal of the software is to augment clinicians and cardiologists by helping them speed workflow, act as a second set of eyes and aid clinical decision support.

    An example of how Agfa is integrating IBM Watson into its radiology workflow. Watson reviewed the X-ray images and the image order, determined the patient had lung cancer and a cardiac history, and pulled in the relevant prior exams, sections of the patient history, and cardiology and oncology department information. It also pulled in recent lab values and the drugs currently being taken. This allows for a more complete view of the patient's condition and may aid in diagnosis or in determining the next step in care.  

    Artificial intelligence (AI) has captured the imagination and attention of doctors over the past couple years as several companies and large research hospitals work to perfect these systems for clinical use. The first concrete examples of how AI (also called deep learning, machine learning or artificial neural networks) will help clinicians are now being commercialized. These systems may offer a paradigm shift in how clinicians work in an effort to significantly boost workflow efficiency, while at the same time improving care and patient throughput. 

    Today, one of the biggest problems facing physicians and clinicians in general is the overload of too much patient information to sift through. This rapid accumulation of electronic data is thanks to the advent of electronic medical records (EMRs) and the capture of all sorts of data about a patient that was not previously recorded, or at least not easily data mined. This includes imaging data, exam and procedure reports, lab values, pathology reports, waveforms, data automatically downloaded from implantable electrophysiology devices, data transferred from the imaging and diagnostics systems themselves, as well as the information entered in the EMR, admission, discharge and transfer (ADT), hospital information system (HIS) and billing software. In the next couple years there will be a further data explosion with the use of bidirectional patient portals, where patients can upload their own data and images to their EMRs. This will include images shot with their phones of things like wound site healing to reduce the need for in-person follow-up office visits. It also will include medication compliance tracking, blood pressure and weight logs, blood sugar, anticoagulant INR and other home monitoring test results, and activity tracking from apps, wearables and the evolving Internet of things (IoT) to aid in keeping patients healthy.

    Physicians liken all this data to drinking from a firehose because it is overwhelming. Many say it is very difficult or impossible to go through the large volumes of data to pick out what is clinically relevant or actionable. It is easy for things to fall through the cracks or for things to be lost to patient follow-up. This issue is further compounded when you add factors like increasing patient volumes, lower reimbursements, bundled payments and the conversion from fee-for-service to a fee-for-value reimbursement system. 

    This is where artificial intelligence will play a key role in the next couple years. AI will not be diagnosing patients and replacing doctors — it will be augmenting their ability to find the key, relevant data they need to care for a patient and present it in a concise, easily digestible format. When a radiologist calls up a chest computed tomography (CT) scan to read, the AI will review the image and identify potential findings immediately — from the image and also by combing through the patient history  related to the particular anatomy scanned. If the exam order is for chest pain, the AI system will call up:

    • All the relevant data and prior exams specific to prior cardiac history;
    • Pharmacy information regarding drugs specific to COPD, heart failure, coronary disease and anticoagulants;
    • Prior imaging exams from any modality of the chest that may aid in diagnosis;
    • Prior reports for that imaging;
    • Prior thoracic or cardiac procedures;
    • Recent lab results; and
    • Any pathology reports that relate to specimens collected from the thorax.

    Patient history from prior reports or the EMR that may be relevant to potential causes of chest pain will also be collected by the AI and displayed in brief with links to the full information (such as history of aortic aneurysm, high blood pressure, coronary blockages, history of smoking, prior pulmonary embolism, cancer, implantable devices or deep vein thrombosis). This information would otherwise take too long to collect, or its existence might not be known by the physician, so they would not have spent time looking for it.   

    Watch the VIDEO “Examples of Artificial Intelligence in Medical Imaging Diagnostics.” This shows an example of how AI can assess aortic dissection CT images.
     

    Watch the VIDEO “Development of Artificial Intelligence to Aid Radiology,” an interview with Mark Michalski, M.D., director of the Center for Clinical Data Science at Massachusetts General Hospital, explaining the basis of artificial intelligence in radiology.

    At the 2017 Health Information and Management Systems Society (HIMSS) annual conference in February, several vendors showed some of the first concrete examples of how this type of AI works. IBM/Merge, Philips, Agfa and Siemens have already started integrating AI into their medical imaging software systems. GE showed predictive analytics software using elements of AI for the impact on imaging departments when someone calls in sick, or if patient volumes increase. Vital showed a similar work-in-progress predictive analytics software for imaging equipment utilization. Others, including several analytics companies and startups, showed software that uses AI to quickly sift through massive amounts of big data or offer immediate clinical decision support for appropriate use criteria, the best test or imaging to make a diagnosis or even offer differential diagnoses.  

    Philips uses AI as a component of its new Illumeo software with adaptive intelligence, which automatically pulls in related prior exams for radiology. The user can click on an area of the anatomy in a specific MPR view, and the AI will find and open prior imaging studies to show the same anatomy, slice and orientation. For oncology imaging, with a couple clicks on the tumor in the image, the AI will perform an automated quantification and then perform the same measures on the priors, presenting a side-by-side comparison of the tumor assessment. This can significantly reduce the time involved with tumor tracking assessment and speed workflow.  

    Read the blog about AI at HIMSS 2017 "Two Technologies That Offer a Paradigm Shift in Medicine at HIMSS 2017."

     

    AI is Elementary to Watson

    IBM Watson has been cited for the past few years as being at the forefront of medical AI, but has yet to commercialize the technology. Some of the first versions of work-in-progress software were shown at HIMSS by partner vendors Agfa and Siemens. Agfa showed an impressive example of how the technology works. A digital radiography (DR) chest X-ray exam was called up and Watson reviewed the image and determined the patient had small-cell lung cancer and evidence of both lung and heart surgery. Watson then searched the picture archiving and communication system (PACS), EMR and departmental reporting systems to bring in:

    • Prior chest imaging studies;
    • Cardiology report information;
    • Medications the patient is currently taking;
    • Patient history relevant to them having COPD and a history of smoking that might relate to their current exam;
    • Recent lab reports;
    • Oncology patient encounters including chemotherapy; and
    • Radiation therapy treatments.

    When the radiologist opens the study, all this information is presented in a concise format and greatly enhances the picture of this patient’s health. Agfa said the goal is to improve the radiologist’s understanding of the patient to improve the diagnosis, therapies and resulting patient outcomes without adding more burden on the clinician. 

    IBM purchased Merge Healthcare in 2015 for $1 billion, partly to get an established foothold in the medical IT market. However, the purchase also gave Watson millions of radiology studies and a vast amount of existing medical record data to help train the AI in evaluating patient data and get better at reading imaging exams. IBM Watson is now licensing its software through third-party agreements with other health IT vendors. The contracts stipulate that each vendor needs to add additional value to Watson with their own programming, not just become a reseller. Probably the most important stipulation of these new contracts is that vendors also are required to share access to all the patient data and imaging studies they have access to. This allows Watson to continue to hone its clinical intelligence with millions of new patient records.  
     

    The Basics of Machine Learning

    Access to vast quantities of patient data and images is needed to feed the AI software algorithms educational materials to learn from. Sorting through massive amounts of big data is how AI learns what is important to clinicians, which data elements relate to various disease states, and how to build clinical understanding. It is a similar process to medical students learning the ropes, but uses far more educational input than a human could absorb. The first step in machine learning software is for it to ingest medical textbooks and care guidelines and then review examples of clinical cases. Unlike human students, the number of cases AI uses to learn runs into the millions. 

    For cases where the AI did not accurately determine the disease state or found incorrect or irrelevant data, software programmers go back and refine the AI algorithm iteration after iteration until the software gets it right in the majority of cases. In medicine, there are so many variables that it is difficult to always arrive at the correct diagnosis, for people or machines. However, percentage-wise, experts now say AI software reading medical imaging studies can often match, or in some cases outperform, human radiologists. This is especially true for rare diseases or presentations, where a radiologist might only see a handful of such cases during their entire career. AI has the advantage of reviewing hundreds or even thousands of these rare studies from archives to become proficient at reading them and identifying a proper diagnosis. Also, unlike the human mind, that experience always remains fresh to the computer. 

    AI algorithms read medical images the way radiologists do, by identifying patterns. AI systems are trained using vast numbers of exams to determine what normal anatomy looks like on scans from CT, magnetic resonance imaging (MRI), ultrasound or nuclear imaging. Then abnormal cases are used to train the eye of the AI system to identify anomalies, similar to computer-aided detection software (CAD). However, unlike CAD, which just highlights areas a radiologist may want to take a closer look at, AI software has a more analytical cognitive ability, based on much more clinical data and reading experience than previous generations of CAD software. For this reason, experts who are helping develop AI for medicine often refer to this cognitive ability as “CAD that works.”

       

    AI All Around Us and the Next Step in Radiology

    Deep learning computers are already driving cars, monitoring financial data for theft, translating languages and recognizing people's moods based on facial recognition, said Keith Dreyer, DO, Ph.D., vice chairman of radiology computing and information sciences at Massachusetts General Hospital, Boston. He was among the key speakers at the opening session of the 2016 Radiological Society of North America (RSNA) meeting in November, where he discussed AI’s entry into medical imaging. He is also in charge of his institution’s development of its own AI system to assist physicians at Mass General. 

    “The data science revolution started about five years ago with the advent of IBM Watson and Google Brain,” Dreyer explained. He said the 2012 introduction of deep learning algorithms really pushed AI forward and by 2014 the scales began to tip in terms of machines reading radiology studies correctly, reaching around 95 percent accuracy.

    Dreyer said AI software for imaging is not new, as most people already use it on Facebook to automatically tag friends the platform identifies using facial recognition algorithms. He said training AI is a similar concept, where you can start with showing a computer photos of cats and dogs and it can be trained to determine the difference after enough images are used. 

    AI requires big data, massive computing power, powerful algorithms, broad investments and then a lot of translation and integration from a programming standpoint before it can be commercialized, Dreyer said. 

    From a radiology standpoint, he said there are two types of AI. The first type that is already starting to see U.S. Food and Drug Administration approval is for quantification AI, which only requires a 510(k) approval. AI developed for clinical interpretation will require FDA pre-market approval (PMA), which involves clinical trials.

    Before machines start conducting primary or peer review reads, Dreyer said it is much more likely AI will be used to read old exams retrospectively to help hospitals find new patients for conditions the patient may not realize they have. He said about 9 million Americans qualify for low-dose CT scans to screen them for lung cancer. He said AI can be trained to search through all the prior chest CT exams on record in the health system to help identify patients that may have lung cancer. This type of retrospective screening may apply to other disease states as well, especially if the AI can pull in genomic testing results to narrow the review to patients who are predisposed to some diseases. 

    He said overall, AI offers a major opportunity to enhance and augment radiology reading, not to replace radiologists. 

    “We are focused on talking into a microphone and we are ignoring all this other data that is out there in the patient record,” Dreyer said. “We need to look at the imaging as just another source of data for the patient.” He said AI can help automate quantification and quickly pull out related patient data from the EMR that will aid diagnosis or the understanding of a patient’s condition.  

    Watch a VIDEO interview with Eliot L. Siegel, M.D., Dwyer Lecturer; Closing Keynote Speaker, Vice Chair of Radiology at the University of Maryland and the Chief of Radiology for VA Maryland Healthcare System, talks about the current state of the industry in computer-aided detection and diagnosis at SIIM 2016. 

    Read the blog “How Intelligent Machines Could Make a Difference in Radiology.”


              Deep Learning in Medical Imaging to Create $300 Million Market by 2021   

    February 15, 2017 — Deep learning, a form of artificial intelligence, will increasingly be used in the interpretation of medical images to address many long-standing industry challenges. This will lead to a $300 million market by 2021, according to a new report by Signify Research, an independent supplier of market intelligence and consultancy to the global healthcare information technology industry.

    In most countries, there are not enough radiologists to meet the ever-increasing demand for medical imaging. Consequently, many radiologists are working at full capacity. The situation will likely get worse, as imaging volumes are increasing at a faster rate than new radiologists entering the field. Even when radiology departments are well-resourced, radiologists are under increasing pressure due to declining reimbursement rates and the transition from volume-based to value-based care delivery. Moreover, the manual interpretation of medical images by radiologists is subjective, often based on a combination of experience and intuition, which can lead to clinical errors.

    A new breed of image analysis software that uses advanced machine learning methods, e.g. deep learning, is tackling these problems by taking on many of the repetitive and time-consuming tasks performed by radiologists. There is a growing array of “intelligent” image analysis products that automate various stages of the imaging diagnosis workflow. In cancer screening, computer-aided detection can alert radiologists to suspicious lesions. In the follow-up diagnosis, quantitative imaging tools provide automated measurements of anatomical features. At the top-end of the scale of diagnostic support, computer-aided diagnosis provides probability-driven, differential diagnosis options for physicians to consider as they formulate their diagnostic and treatment decisions.

    “Radiology is evolving from a largely descriptive field to a more quantitative discipline. Intelligent software tools that combine quantitative imaging and clinical workflow features will not only enhance radiologist productivity, but also improve diagnostic accuracy,” said Simon Harris, principal analyst at Signify Research and author of the report.

    However, it is early days for deep learning in medical imaging. There are only a handful of commercial products and it is uncertain how well deep learning will cope with variations in patient demographics, imaging protocols, image artifacts, etc. Many radiologists were left underwhelmed by early-generation computer-aided detection, which used traditional machine learning and relied heavily on feature engineering. They remain skeptical of machine learning’s abilities, despite the leap in performance of today’s deep learning solutions, which automatically learn about image features from radiologist-annotated images and a “ground truth”. Furthermore, the “black box” nature of deep learning and the lack of traceability as to how results are obtained could lead to legal implications. While none of these problems are insurmountable, healthcare providers are likely to take a ‘wait and see’ approach before investing in deep learning-based solutions.

    “Deep learning is a truly transformative technology and the longer-term impact on the radiology market should not be underestimated. It’s more a question of when, not if, machine learning will be routinely used in imaging diagnosis”, Harris concluded.

    “Machine Learning in Medical Imaging – 2017 Edition” provides a data-centric and global outlook on the current and projected uptake of machine learning in medical imaging. The report blends primary data collected from in-depth interviews with healthcare professionals and technology vendors, to provide a balanced and objective view of the market.

    For more information: www.signifyresearch.net


              Python vs Julia - an example from machine learning   

    In Speeding up isotonic regression in scikit-learn, we dropped down into Cython to improve the performance of a regression algorithm. I thought it would be interesting to compare the performance of this (optimized) code in Python against the naive Julia implementation.

    This article continues on from the previous one, so it may be worth reading that before continuing here to obtain the necessary background information.

    We'll implement both of the algorithms from the previous article, and compare their performance in Julia against Python.

    Linear PAVA

    The Cython code is available on GitHub at scikit-learn, and the Julia code is available on GitHub at Isotonic.jl

    The Julia implementation is a straightforward implementation of PAVA, without any bells and whistles. The @inbounds macro was used to compare fairly with the Cython implementation, which turns off bounds checking as well.
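
    For reference, the pool-adjacent-violators idea itself fits in a few lines. This is an illustrative Python sketch, not the scikit-learn Cython code or the Isotonic.jl code:

    ```python
    def pava(y, weights=None):
        """Pool Adjacent Violators: the isotonic (non-decreasing)
        weighted least-squares fit to the sequence y."""
        if weights is None:
            weights = [1.0] * len(y)
        # Each block is [mean value, total weight, count of points pooled].
        blocks = []
        for value, weight in zip(y, weights):
            blocks.append([value, weight, 1])
            # Merge backwards while adjacent blocks violate monotonicity.
            while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
                v2, w2, c2 = blocks.pop()
                v1, w1, c1 = blocks.pop()
                w = w1 + w2
                blocks.append([(v1 * w1 + v2 * w2) / w, w, c1 + c2])
        # Expand the pooled blocks back to a full-length solution.
        solution = []
        for value, _, count in blocks:
            solution.extend([value] * count)
        return solution
    ```

    For example, `pava([4, 3, 2, 1])` pools the whole decreasing run into its mean, returning `[2.5, 2.5, 2.5, 2.5]`.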

    Active Set

    The active set implementation is approximately the same number of lines as the Cython implementation, and is perhaps more cleanly structured, thanks to an explicit composite type (ActiveState) that maintains a given active dual variable's parameters. It is also easy to break repeated code into separate functions that can be trivially inlined by LLVM, while this is difficult for arbitrary arguments in Cython.

    One-based indexing in Julia also made the algorithm somewhat cleaner.

    Performance

    We see that exactly the same algorithm in Julia is uniformly faster when compared to an equivalent Cython implementation.

    For the active set implementations, Julia is anywhere between 5x and 300x faster on equivalent regression problems.

    For the linear PAVA implementation, Julia is between 1.1x and 4x faster.

    This certainly indicates Julia is a very attractive choice for performance-critical machine learning applications.

    See the IJulia notebook for more information on how these performance measurements were obtained.

    Discuss this article on HackerNews.


              The Performance of Decision Tree Evaluation Strategies   

    UPDATE: Compiled evaluation is now implemented for scikit-learn regression tree/ensemble models, available at https://github.com/ajtulloch/sklearn-compiledtrees or pip install sklearn-compiledtrees.

    Our previous article on decision trees dealt with techniques to speed up the training process, though often the performance-critical component of the machine learning pipeline is the prediction side. Training takes place offline, whereas predictions are often in the hot path - consider ranking documents in response to a user query à la Google, Bing, etc. Many candidate documents need to be scored as quickly as possible, and the top k results returned to the user.

    Here, we'll focus on a few methods to improve the performance of evaluating an ensemble of decision trees - encompassing random forests, gradient boosted decision trees, AdaBoost, etc.

    There are three methods we'll focus on here:

    • Recursive tree walking (naive)
    • Flattening the decision tree (flattened)
    • Compiling the tree to machine code (compiled)

    We'll show that choosing the right strategy can improve evaluation time by more than 2x - which can be a very significant performance improvement indeed.

    All code (implementation, drivers, analysis scripts) are available on GitHub at the decisiontrees-performance repository.

    Naive Method

    Superficially, decision tree evaluation is fairly simple - given a feature vector, recursively walk down the tree, using the given feature vector to choose whether to proceed down the left branch or the right branch at each point. When we reach a leaf, we just return the value at the leaf.

    In Haskell,

    In C++,

    From now on, we'll focus on the C++ implementation, and how we can speed this up.
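
    As an illustrative stand-in for those listings (the `Node` class here is hypothetical, not the repository's types), the recursive walk looks roughly like this in Python:

    ```python
    class Node:
        """A tree node. Leaves carry a value; internal nodes send the input
        left when feature_vector[feature] < threshold, else right."""
        def __init__(self, value=None, feature=None, threshold=None,
                     left=None, right=None):
            self.value, self.feature, self.threshold = value, feature, threshold
            self.left, self.right = left, right

    def evaluate(node, feature_vector):
        # Walk down from the root until we reach a leaf, then return its value.
        while node.value is None:
            if feature_vector[node.feature] < node.threshold:
                node = node.left
            else:
                node = node.right
        return node.value
    ```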

    This approach has a few weaknesses - data cache behavior is pretty much the worst case, since we're jumping all over memory to go from one node to the next. Given the cost of cache misses on modern CPU architectures, we'll most likely see some performance improvements from optimizing this approach.

    Flattened Tree Method

    A nice trick to improve cache locality is to lay the data out in a flattened form and jump between locations in that flattened representation. This is analogous to representing a binary heap as an array.

    The technique is simply to flatten the tree out, so moving from a parent to a child will often mean accessing memory in the same cache line - and given the cost of cache misses on modern CPU architectures, minimizing these can lead to significant performance improvements.

    We implement two strategies along this approach:

    • Piecewise flattened, where for an ensemble of weak learners, we store a vector of flattened trees - with one element for each weak learner.
    • Contiguous flattened, where we concatenate the flattened representation of each weak learner into a single vector, and store the indices of the root of each learner. In some circumstances, this may improve cache locality even more, though we see that it is outperformed in most circumstances by the piecewise flattened approach.

    Our implementation is given below:
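
    As an illustrative stand-in for that listing (the tuple-based node layout here is hypothetical), the piecewise-flattened approach can be sketched in Python as:

    ```python
    # Each flattened tree is a list of tuples:
    #   (feature, threshold, left, right, value)
    # where left/right are indices into the same list. Internal nodes have
    # feature >= 0; leaves use feature = -1 and carry the prediction in value.

    def evaluate_flattened(tree, feature_vector):
        node = tree[0]
        while node[0] >= 0:  # internal node: jump to the chosen child index
            feature, threshold, left, right, _ = node
            node = tree[left if feature_vector[feature] < threshold else right]
        return node[4]

    def evaluate_ensemble(trees, feature_vector):
        # Piecewise flattened: one flattened array per weak learner,
        # summing the individual predictions.
        return sum(evaluate_flattened(t, feature_vector) for t in trees)
    ```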

    Compiled Tree Method

    A really cool technique that has been known for years is generating C code representing a decision tree, compiling it into a shared library, and then loading the compiled decision tree function via dlopen(3). I found a 2010 UWash student report describing this technique, though the earliest reference I've seen is from approximately 2000 in a presentation on Alta Vista's machine learning system (which I unfortunately cannot find online).

    The gist of this approach is to traverse the trees in the ensemble, generating C code as we go. For example, if a regression stump has the logic "return 0.02 if feature 5 is less than 0.8, otherwise return 0.08.", we would generate the code:

    float evaluate(float* feature_vector) {
      if (feature_vector[5] < 0.8) {
        return 0.02;
      } else {
        return 0.08;
      }
    }
    

    For example, here is the code generated by a randomly constructed ensemble with two trees:

    The core C++ function used to generate this is given below:
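
    As a rough stand-in for that listing, the same code-generation idea can be sketched in Python (the dict-based node representation here is hypothetical):

    ```python
    def generate_c(node, indent="  "):
        """Recursively emit C source for one tree, matching the stump above."""
        if node.get("value") is not None:  # leaf: just return the value
            return "%sreturn %s;\n" % (indent, node["value"])
        return (
            "%sif (feature_vector[%d] < %s) {\n"
            % (indent, node["feature"], node["threshold"])
            + generate_c(node["left"], indent + "  ")
            + "%s} else {\n" % indent
            + generate_c(node["right"], indent + "  ")
            + "%s}\n" % indent
        )

    def generate_evaluator(root):
        # Wrap the emitted branches in the evaluate() function signature.
        return ("float evaluate(float* feature_vector) {\n"
                + generate_c(root)
                + "}\n")
    ```

    Feeding this the stump from the earlier example reproduces essentially the generated C shown above; the resulting source would then be compiled to a shared library and loaded via dlopen.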

    For evaluation, we just use the dlopen and dlsym to extract a function pointer from the generated .so file.

    We can examine the evaluation time of each strategy at a fixed tree depth and number of features, and see that at these levels the compiled strategy is significantly faster. Note that all strategies scale roughly linearly in the number of weak learners, as expected.

    Evaluation time of different strategies for fixed depth and number of features

    Performance Evaluation

    As the student report indicates, the relative performance of each strategy depends on the size of the trees, the number of trees, and the number of features in the given feature vector.

    Our methodology is to generate a random ensemble with a given depth, number of trees, and number of features, construct the evaluators of this tree for each strategy, and measure the evaluation time of each strategy across a set of randomly generated feature vectors. (We also check correctness of the implementations via a QuickCheck style test that each strategy computes the same result for a given feature vector).

    \begin{align} \text{num\_trees} &\in [1, 1000] \\ \text{depth} &\in [1, 6] \\ \text{num\_features} &\in [1, 10000] \end{align}

    Visualization

    We look at trellis plots of the evaluation time against number of trees, for the various evaluation strategies.

    The following diagram is the entire parameter space explored (click for more detail).

    Regression

    To quantify these effects on the cost of evaluation for the different algorithms, we fit a linear model against these covariates, conditioned on the algorithm used. Conceptually, we are just splitting our dataset by the algorithm used, and fitting a separate linear model on each of these subsets.

    (as an aside - the R formula syntax is a great example of a DSL done right.)

    We see $R^2$ values of ~0.75-0.85, with almost all coefficients statistically different from zero at the 0.1% level - so we can draw some provisional inferences from this model.

    We note:

    • the compiled tree strategy is much more sensitive to the depth of the decision tree, which aligns with observations made in the student report.
    • the compiled tree strategy and the naive strategy are also more sensitive to the number of trees than the flattened evaluation strategy. Thus for models with huge numbers of trees, the flattened evaluation may be the best.
    • The intercept term for the compiled tree is the most negative - thus for 'small' models - low number of trees of small depth, the compiled tree approach may be the best evaluation strategy.

    Conclusions

    We've implemented and analyzed the performance of a selection of decision tree evaluation strategies. It appears there are three main conclusions:

    • For small models - <200 or so trees with average depth <2, the compiled evaluation strategy is the fastest.
    • For larger models, the piecewise flattened evaluation strategy is most likely the fastest.
    • Choosing the right evaluation strategy can, in the right places, improve performance by greater than 2x.

    Next, I'll look at implementing these methods in some commonly used open-source ML packages, such as scikit-learn.


              Elements of Statistical Learning - Chapter 2 Solutions   

    The Stanford textbook Elements of Statistical Learning by Hastie, Tibshirani, and Friedman is an excellent (and freely available) graduate-level text in data mining and machine learning. I'm currently working through it, and I'm putting my (partial) exercise solutions up for anyone who might find them useful. The first set of solutions is for Chapter 2, An Overview of Supervised Learning, introducing least squares and k-nearest-neighbour techniques.

    Exercise Solutions

    See the solutions in PDF format (source) for a more pleasant reading experience. This webpage was created from the LaTeX source using the LaTeX2Markdown utility - check it out on GitHub.

    Overview of Supervised Learning

    Exercise 2.1

    Suppose that each of the $K$ classes has an associated target $t_k$, which is a vector of all zeroes, except a one in the $k$-th position. Show that classifying to the largest element of $\hat y$ amounts to choosing the closest target, $\min_k \| t_k - \hat y \|$, if the elements of $\hat y$ sum to one.

    Proof

    The assertion is equivalent to showing that \begin{equation} \text{argmax}_i \hat y_i = \text{argmin}_k \| t_k - \hat y \| = \text{argmin}_k \|\hat y - t_k \|^2 \end{equation} by monotonicity of $x \mapsto x^2$ and symmetry of the norm.

    WLOG, let $\| \cdot \|$ be the Euclidean norm $\| \cdot \|_2$. Let $k = \text{argmax}_i \hat y_i$, so that $\hat y_k = \max_i \hat y_i$. Note that then $\hat y_k \geq \frac{1}{K}$, since $\sum_i \hat y_i = 1$.

    Then for any $k' \neq k$ (so that $\hat y_{k'} \leq \hat y_k$), the targets $t_k$ and $t_{k'}$ differ only in coordinates $k$ and $k'$, so all other terms cancel and \begin{align} \| \hat y - t_{k'} \|_2^2 - \| \hat y - t_k \|_2^2 &= \hat y_k^2 + \left(\hat y_{k'} - 1 \right)^2 - \left( \hat y_{k'}^2 + \left(\hat y_k - 1 \right)^2 \right) \\ &= 2 \left(\hat y_k - \hat y_{k'}\right) \\ &\geq 0 \end{align} since $\hat y_{k'} \leq \hat y_k$ by assumption.

    Thus we must have

    \begin{equation} \label{eq:6} \text{argmin}_k \| t_k - \hat y \| = \text{argmax}_i \hat y_i \end{equation}

    as required.
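    A quick numerical check of this equivalence (my addition, not part of the solutions proper):

```python
import numpy as np

# For random probability vectors y, the argmax coordinate should coincide with
# the nearest one-hot target t_k.
rng = np.random.default_rng(0)
K = 5
targets = np.eye(K)                  # t_k = k-th standard basis vector
for _ in range(1000):
    y = rng.random(K)
    y /= y.sum()                     # elements of y sum to one
    nearest = np.argmin(np.linalg.norm(targets - y, axis=1))
    assert nearest == np.argmax(y)
```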

    Exercise 2.2

    Show how to compute the Bayes decision boundary for the simulation example in Figure 2.5.

    Proof

    The Bayes classifier is \begin{equation} \label{eq:2} \hat G(X) = \text{argmax}_{g \in \mathcal G} P(g | X = x ).
    \end{equation}

    In our two-class example $\textbf{orange}$ and $\textbf{blue}$, the decision boundary is the set where

    \begin{equation} \label{eq:5} P(g=\textbf{blue} | X = x) = P(g =\textbf{orange} | X = x) = \frac{1}{2}. \end{equation}

    By the Bayes rule, this is equivalent to the set of points where

    \begin{equation} \label{eq:4} P(X = x | g = \textbf{blue}) P(g = \textbf{blue}) = P(X = x | g = \textbf{orange}) P(g = \textbf{orange}) \end{equation}

    As we know $P(g)$ and $P(X=x|g)$, the decision boundary can be calculated.
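    As a concrete illustration, much simpler than the book's ten-component mixture: with a single univariate Gaussian per class and equal priors, the boundary can be located numerically as the point where the weighted class-conditional densities cross (the names below are my own):

```python
import numpy as np

def gauss(x, mu, sigma=1.0):
    """Univariate normal density."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Equal priors; class "blue" centred at -1, class "orange" at +1.
xs = np.linspace(-5, 5, 100001)
diff = 0.5 * gauss(xs, -1.0) - 0.5 * gauss(xs, 1.0)
boundary = xs[np.argmin(np.abs(diff))]   # ~ 0.0, the midpoint, by symmetry
```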

    Exercise 2.3

    Derive equation (2.24)

    Proof

    TODO

    Exercise 2.4

    Consider $N$ data points uniformly distributed in a $p$-dimensional unit ball centered at the origin. Show that the median distance from the origin to the closest data point is given by \begin{equation} \label{eq:7} d(p, N) = \left(1-\left(\frac{1}{2}\right)^{1/N}\right)^{1/p} \end{equation}

    Proof

    Let $r$ be the median distance from the origin to the closest data point. Then \begin{equation} \label{eq:8} P(\text{All $N$ points are further than $r$ from the origin}) = \frac{1}{2} \end{equation} by definition of the median.

    Since the points $x_i$ are independently distributed, this implies that \begin{equation} \label{eq:9} \frac{1}{2} = \prod_{i=1}^N P(\|x_i\| > r) \end{equation} and as the points $x_i$ are uniformly distributed in the unit ball, we have that \begin{align} P(\| x_i \| > r) &= 1 - P(\| x_i \| \leq r) \\ &= 1 - \frac{Kr^p}{K} \\ &= 1 - r^p \end{align} where $K$ is the volume of the unit $p$-ball, so that $Kr^p$ is the volume of the ball of radius $r$.

    Putting these together, we obtain that \begin{equation} \label{eq:10} \frac{1}{2} = \left(1-r^p \right)^{N}
    \end{equation} and solving for $r$, we have \begin{equation} \label{eq:11} r = \left(1-\left(\frac{1}{2}\right)^{1/N}\right)^{1/p} \end{equation}
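    The closed form is easy to sanity-check by simulation (my addition; the sampling exploits the fact that $\|x\|$ for a uniform point in the unit $p$-ball is distributed as $U^{1/p}$ with $U$ uniform on $[0,1]$):

```python
import numpy as np

# Monte-Carlo check of d(p, N): the empirical median of the nearest-point
# distance should match the closed form.
rng = np.random.default_rng(1)

def nearest_distance(N, p):
    """Distance to the closest of N uniform points in the unit p-ball."""
    return (rng.random(N) ** (1.0 / p)).min()

N, p = 500, 10
closed_form = (1 - 0.5 ** (1 / N)) ** (1 / p)
empirical = np.median([nearest_distance(N, p) for _ in range(2000)])
# empirical and closed_form both come out around 0.52
```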

    Exercise 2.5

    Consider inputs drawn from a spherical multivariate-normal distribution $X \sim N(0,\mathbf{1}_p)$. The squared distance from any sample point to the origin has a $\chi^2_p$ distribution with mean $p$. Consider a prediction point $x_0$ drawn from this distribution, and let $a = \frac{x_0}{\| x_0\|}$ be an associated unit vector. Let $z_i = a^T x_i$ be the projection of each of the training points on this direction.

    Show that the $z_i$ are distributed $N(0,1)$ with expected squared distance from the origin 1, while the target point has expected squared distance $p$ from the origin.

    Hence for $p = 10$, a randomly drawn test point is about 3.1 standard deviations from the origin, while all the training points are on average one standard deviation along direction a. So most prediction points see themselves as lying on the edge of the training set.

    Proof

    Let $z_i = a^T x_i = \frac{x_0^T}{\| x_0 \|} x_i$. Then $z_i$ is a linear combination of $N(0,1)$ random variables, and hence normal, with expectation zero and variance

    \begin{equation} \label{eq:12} \text{Var}(z_i) = \text{Var}(a^T x_i) = a^T \text{Var}(x_i) a = a^T \mathbf{1}_p a = \| a \|^2 = 1 \end{equation} as the vector $a$ has unit length and $x_i \sim N(0, \mathbf{1}_p)$ (here $a$ is treated as fixed, conditioning on $x_0$).

    Meanwhile the squared distance of the target point $x_0$ from the origin is $\chi^2_p$ distributed, with mean $p$, as required.
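    A quick empirical illustration of both claims (my addition):

```python
import numpy as np

# In p = 10 dimensions, the projections z_i = a^T x_i have unit variance,
# while the squared length of the test point x_0 has mean p.
rng = np.random.default_rng(2)
p, n_train, n_trials = 10, 100, 200

z_all, sq_dists = [], []
for _ in range(n_trials):
    x0 = rng.normal(size=p)                 # test point
    a = x0 / np.linalg.norm(x0)             # associated unit vector
    x = rng.normal(size=(n_train, p))       # training points
    z_all.append(x @ a)                     # projections onto direction a
    sq_dists.append(x0 @ x0)                # ||x_0||^2 ~ chi^2_p

z = np.concatenate(z_all)
# z.mean() is near 0, z.var() near 1, and np.mean(sq_dists) near p
```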

    Exercise 2.6

    1. Derive equation (2.27) in the notes.
    2. Derive equation (2.28) in the notes.

    Proof

    1. We have \begin{align} EPE(x_0) &= E_{y_0 | x_0} E_{\mathcal{T}}(y_0 - \hat y_0)^2 \\ &= \text{Var}(y_0|x_0) + E_{\mathcal T}[\hat y_0 - E_{\mathcal T} \hat y_0]^2 + [E_{\mathcal T} \hat y_0 - x_0^T \beta]^2 \\ &= \text{Var}(y_0 | x_0) + \text{Var}_\mathcal{T}(\hat y_0) + \text{Bias}^2(\hat y_0). \end{align} We now treat each term individually. Since the estimator is unbiased, we have that the third term is zero. Since $y_0 = x_0^T \beta + \epsilon$ with $\epsilon$ an $N(0,\sigma^2)$ random variable, we must have $\text{Var}(y_0|x_0) = \sigma^2$. The middle term is more difficult. First, note that we have \begin{align} \text{Var}_{\mathcal T}(\hat y_0) &= \text{Var}_{\mathcal T}(x_0^T \hat \beta) \\ &= x_0^T \text{Var}_{\mathcal T}(\hat \beta) x_0 \\ &= E_{\mathcal T} x_0^T \sigma^2 (\mathbf{X}^T \mathbf{X})^{-1} x_0 \end{align} by conditioning (3.8) on $\mathcal T$.
    2. TODO

    Exercise 2.7

    Consider a regression problem with inputs $x_i$ and outputs $y_i$, and a parameterized model $f_\theta(x)$ to be fit with least squares. Show that if there are observations with tied or identical values of $x$, then the fit can be obtained from a reduced weighted least squares problem.

    Proof

    This is relatively simple. WLOG, assume that $x_1 = x_2$, and all other observations are unique. Writing $\bar y = \frac{1}{2}(y_1 + y_2)$, the contribution of the tied pair to the RSS is

    \begin{equation} \label{eq:13} \left(y_1 - f_\theta(x_1)\right)^2 + \left(y_2 - f_\theta(x_1)\right)^2 = 2\left(\bar y - f_\theta(x_1)\right)^2 + \frac{1}{2}\left(y_1 - y_2\right)^2 \end{equation}

    where the last term does not depend on $\theta$. Thus minimizing the RSS is equivalent to minimizing

    \begin{equation} \label{eq:14} RSS_w(\theta) = \sum_{i=2}^N w_i \left(\tilde y_i - f_\theta(x_i) \right)^2, \qquad w_i = \begin{cases} 2 & i = 2 \\ 1 & \text{otherwise,} \end{cases} \end{equation}

    where $\tilde y_2 = \bar y$ and $\tilde y_i = y_i$ for $i > 2$.

    Thus we have converted our least squares estimation into a reduced weighted least squares estimation. This minimal example is easily generalised: a group of $m$ observations tied at the same $x$ collapses to a single observation at their mean response, carrying weight $m$.
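    A numeric check (my addition) that collapsing a tied pair to its averaged response with weight 2 leaves the linear least-squares fit unchanged:

```python
import numpy as np

# Full problem: x_1 = x_2 tied, with different responses.
x = np.array([1.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 0.9, 2.0, 2.9, 4.2])
X = np.column_stack([np.ones_like(x), x])          # model f(x) = a + b x
theta_full = np.linalg.lstsq(X, y, rcond=None)[0]

# Reduced problem: one row for the tie, response averaged, weight 2.
xr = np.array([1.0, 2.0, 3.0, 4.0])
yr = np.array([1.0, 2.0, 2.9, 4.2])
w = np.array([2.0, 1.0, 1.0, 1.0])
Xr = np.column_stack([np.ones_like(xr), xr])
sw = np.sqrt(w)                                    # weighted LS via sqrt(w) scaling
theta_red = np.linalg.lstsq(Xr * sw[:, None], yr * sw, rcond=None)[0]
# theta_full and theta_red agree
```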

    Exercise 2.8

    Suppose that we have a sample of $N$ pairs $x_i, y_i$, drawn IID from the distribution such that \begin{align} x_i \sim h(x), \\ y_i = f(x_i) + \epsilon_i, \\ E(\epsilon_i) = 0, \\ \text{Var}(\epsilon_i) = \sigma^2. \end{align} We construct an estimator for $f$ linear in the $y_i$, \begin{equation} \label{eq:16} \hat f(x_0) = \sum_{i=1}^N \ell_i(x_0; \mathcal X) y_i \end{equation} where the weights $\ell_i(x_0; \mathcal X)$ do not depend on the $y_i$, but do depend on the training sequence $x_i$, denoted by $\mathcal X$.

    1. Show that the linear regression and $k$-nearest-neighbour regression are members of this class of estimators. Describe explicitly the weights $\ell_i(x_0; \mathcal X)$ in each of these cases.
    2. Decompose the conditional mean-squared error \begin{equation} \label{eq:17} E_{\mathcal Y | \mathcal X} \left( f(x_0) - \hat f(x_0) \right)^2 \end{equation} into a conditional squared bias and a conditional variance component. $\mathcal Y$ represents the entire training sequence of $y_i$.
    3. Decompose the (unconditional) MSE \begin{equation} \label{eq:18} E_{\mathcal Y, \mathcal X}\left(f(x_0) - \hat f(x_0) \right)^2 \end{equation} into a squared bias and a variance component.
    4. Establish a relationship between the square biases and variances in the above two cases.

    Proof

    1. Recall that the estimator for $f$ in the linear regression case is given by \begin{equation} \label{eq:19} \hat f(x_0) = x_0^T \beta \end{equation} where $\beta = (X^T X)^{-1} X^T y$. Then we can simply write \begin{equation} \label{eq:20} \hat f(x_0) = \sum_{i=1}^N \left( x_0^T (X^T X)^{-1} X^T \right)_i y_i. \end{equation} Hence \begin{equation} \label{eq:21} \ell_i(x_0; \mathcal X) = \left( x_0^T (X^T X)^{-1} X^T \right)_i. \end{equation} In the $k$-nearest-neighbour representation, we have \begin{equation} \label{eq:22} \hat f(x_0) = \sum_{i=1}^N \frac{y_i}{k} \mathbf{1}_{x_i \in N_k(x_0)} \end{equation} where $N_k(x_0)$ represents the set of $k$-nearest-neighbours of $x_0$. Clearly, \begin{equation} \label{eq:23} \ell_i(x_0; \mathcal X) = \frac{1}{k} \mathbf{1}_{x_i \in N_k(x_0)} \end{equation}

    2. TODO

    3. TODO
    4. TODO
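    The weights in part 1 can be verified numerically (my addition): both fits are reproduced exactly by the stated $\ell_i(x_0; \mathcal X)$.

```python
import numpy as np

rng = np.random.default_rng(3)
N, p, k = 50, 3, 5
X = rng.normal(size=(N, p))
y = rng.normal(size=N)
x0 = rng.normal(size=p)

# Linear regression: l(x0) = x0^T (X^T X)^{-1} X^T reproduces x0^T beta-hat.
ell_ols = x0 @ np.linalg.solve(X.T @ X, X.T)
beta = np.linalg.lstsq(X, y, rcond=None)[0]
assert np.isclose(ell_ols @ y, x0 @ beta)

# k-NN: l_i = 1/k on the k nearest points reproduces the k-NN average.
order = np.argsort(np.linalg.norm(X - x0, axis=1))
ell_knn = np.zeros(N)
ell_knn[order[:k]] = 1.0 / k
assert np.isclose(ell_knn @ y, y[order[:k]].mean())
```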

    Exercise 2.9

    Compare the classification performance of linear regression and $k$-nearest neighbour classification on the zipcode data. In particular, consider only the 2's and 3's, and $k = 1, 3, 5, 7, 15$. Show both the training and test error for each choice.

    Proof

    Our implementation in R and graphs are attached.
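    The R code itself isn't reproduced here, but a hedged Python analogue (my addition: scikit-learn stands in for R, and its bundled digits set stands in for the book's ZIP-code data) looks like:

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Restrict to the 2's and 3's, coding the classes as 0/1 for regression.
digits = load_digits()
mask = np.isin(digits.target, (2, 3))
X, y = digits.data[mask], (digits.target[mask] == 3).astype(float)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

# Linear regression, thresholded at 1/2 to classify.
reg = LinearRegression().fit(X_tr, y_tr)
reg_err = np.mean((reg.predict(X_te) > 0.5) != y_te)

# k-NN test errors for the k values in the exercise.
knn_err = {k: 1 - KNeighborsClassifier(n_neighbors=k).fit(X_tr, y_tr).score(X_te, y_te)
           for k in (1, 3, 5, 7, 15)}
```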

    Exercise 2.10

    Consider a linear regression model with $p$ parameters, fitted by OLS to a set of training data $(x_i, y_i)_{1 \leq i \leq N}$ drawn at random from a population. Let $\hat \beta$ be the least squares estimate. Suppose we have some test data $(\tilde x_i, \tilde y_i)_{1 \leq i \leq M}$ drawn at random from the same population as the training data. If $R_{tr}(\beta) = \frac{1}{N} \sum_{i=1}^N \left(y_i - \beta^T x_i \right)^2$ and $R_{te}(\beta) = \frac{1}{M} \sum_{i=1}^M \left( \tilde y_i - \beta^T \tilde x_i \right)^2$, prove that \begin{equation} \label{eq:15} E(R_{tr}(\hat \beta)) \leq E(R_{te}(\hat \beta)) \end{equation} where the expectation is over all that is random in each expression.
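    No proof is attempted here, but a simulation (my addition) illustrates the inequality: averaged over repeated draws, the training error of the fitted model sits below its test error.

```python
import numpy as np

rng = np.random.default_rng(4)
p, N, M, trials = 5, 20, 20, 2000
beta = rng.normal(size=p)            # true coefficients, fixed across trials

tr_errs, te_errs = [], []
for _ in range(trials):
    X = rng.normal(size=(N, p)); y = X @ beta + rng.normal(size=N)
    Xt = rng.normal(size=(M, p)); yt = Xt @ beta + rng.normal(size=M)
    bhat = np.linalg.lstsq(X, y, rcond=None)[0]
    tr_errs.append(np.mean((y - X @ bhat) ** 2))
    te_errs.append(np.mean((yt - Xt @ bhat) ** 2))

# mean training error comes out near sigma^2 (N - p)/N = 0.75,
# below the mean test error
```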


              Could fund strategies be better managed by Intelligent ML/Ai based platforms?   

    @tomn wrote:

    I would be interested in knowing what your views are around a platform which could run any fund strategy and optimise it for portfolio performance metrics. If such a platform were to exist, it would offer asset managers the ability to focus on AUM while the machine-learning-based automation takes care of the fund management and produces optimal performance. This would fundamentally alter the value offering: humans are great at relationships, i.e. managing inflows of AUM, while machines are better at understanding data and automation, i.e. processing market price data to evaluate optimal investing strategies and returns.

    Posts: 6

    Participants: 3

    Read full topic


              Machine Learning Systems Developer - Cisco - San Jose, CA   
    We Are Cisco. Overall, your role will be to ensure the smooth operation of our internal data and machine learning systems....
    From Cisco Systems - Tue, 06 Jun 2017 17:10:47 GMT - View all San Jose, CA jobs
              SANSA 0.2 (Semantic Analytics Stack) Released   
    The AKSW and Smart Data Analytics groups are happy to announce SANSA 0.2 – the second release of the Scalable Semantic Analytics Stack. SANSA employs distributed computing for semantic technologies in order to allow scalable machine learning, inference and querying capabilities … Continue reading
              SML-Bench 0.2 Released   
    Dear all, we are happy to announce the 0.2 release of SML-Bench, our Structured Machine Learning benchmark framework. SML-Bench provides full benchmarking scenarios for inductive supervised machine learning covering different knowledge representation languages like OWL and Prolog. It already comes … Continue reading
              Machine Learning Scientist - RSVP Technologies Inc. - Ontario   
    Minimum 3 years of programming experience in Java, C/C++, Python or other languages. Excellent literature survey, reading and writing skills in English....
    From RSVP Technologies Inc. - Thu, 15 Jun 2017 10:38:49 GMT - View all Ontario jobs
              ADAS Software Engineer - Robotics/Machine Learning - Intel - San Jose, CA   
    Development of middleware components for highly efficient embedded systems. Intel is looking for a skilled Automated Driving Software Engineer who will design...
    From Intel - Thu, 22 Jun 2017 10:37:20 GMT - View all San Jose, CA jobs
              Tuputech Building Machine Learning into User-generated Content Moderation   
    GUANGZHOU, China, June 29, 2017 /PRNewswire/ -- With a quarter of the world's population using social media to spread millions of pieces of information, social media websites are now under tremendous pressures of finding ways to better police the tidal wave of content published daily....
              Comment on How to Setup a Python Environment for Machine Learning and Deep Learning with Anaconda by Jason Brownlee   
    Very nice Candida.
              Comment on How to Setup a Python Environment for Machine Learning and Deep Learning with Anaconda by Candida   
    scipy: 0.19.0 numpy: 1.12.1 matplotlib: 2.0.2 pandas: 0.20.1 statsmodels: 0.8.0 sklearn: 0.18.1
              ADAS Engineer I - Hitachi Automotive Systems Americas, Inc. - Sunnyvale, CA   
    Design image processing system with collaboration between Hitachi group companies and/or other technical partners by utilizing machine learning technologies....
    From Hitachi - Thu, 11 May 2017 18:14:52 GMT - View all Sunnyvale, CA jobs
              Senior Director - Lumada- Hitachi Insight Group - Hitachi Data Systems - Waltham, MA   
    We use machine learning to optimize the production and operation of machines, fleets, &amp; factories. You must possess a unique blend of business and technical...
    From Hitachi Data Systems - Sat, 04 Mar 2017 02:09:35 GMT - View all Waltham, MA jobs
                 
    8-on-8 networked matches are no problem. #中大經濟 #CUHK #Economics is likely the first economics department in the world to have a teaching lab fully equipped with GPUs for machine learning instruction.
              Associate Software Engineer - Daimler - Bellevue, WA   
    MBRDNA is headquartered in Silicon Valley, California, with key areas of Advanced Interaction Design, Digital User Experience, Machine Learning, Autonomous...
    From Daimler - Tue, 27 Jun 2017 20:24:29 GMT - View all Bellevue, WA jobs
              Here's the top 20 jobs in danger of being replaced by robots in the next 20 years   
    Will a robot someday steal your job?

    Day by day, machines are getting smarter and more efficient than people. Employers find it more practical to use them because their basic AI (artificial intelligence) can substantially do the tasks that a person used to do.

    A computer program usually does the job faster, more accurately, and for less money, without any health insurance costs. It doesn't charge for overtime, nor does it ask for paid leave.

    A recent Oxford University study compared the tasks of nearly a thousand jobs to the predicted future ability of robotic technologies, specifically in the fields of "Machine Learning" and "Machine Robotics".

    Using a methodology called Gaussian process, the Oxford researchers measured the "probability of computerization" of various professions and found out that robots have the potential to substitute for human brains and hands.

    Administrative, clerical and production workers might be the first to be replaced by robots in the next 10 to 20 years, the study added.


    Here are the top 20 jobs most likely to be replaced by robots:

    20. Electrical and electronic equipment assemblers
    19. Postal service workers
    18. Jewelers and precious stone and metal workers
    17. Restaurant cooks
    16. Grinding and polishing workers
    15. Cashiers
    14. Bookkeepers
    13. Legal secretaries
    12. Fashion models
    11. Drivers
    10. Credit analysts
    9. Milling and planing machine setters, operators, and tenders
    8. Packaging and filling-machine operators and tenders
    7. Procurement clerks
    6. Umpires and referees
    5. Tellers
    4. Loan officers
    3. Timing-device assemblers and adjusters
    2. Tax preparers
    1. Telemarketers

    Runners-up will be Insurance Appraisers for Auto Damage, Order Clerks, Brokerage Clerks, Insurance Claims and Policy Processing Clerks, Data Entry Keyers, Library Technicians, New Accounts Clerks, Photographic Process Workers and Processing Machine Operators, Cargo and Freight Agents, Watch Repairers, Insurance Underwriters, Mathematical Technicians, Sewers (Hand), and Title Examiners, Abstractors, and Searchers.


    WHAT DO YOU THINK OF THIS POST?
    Share your ideas by commenting.


              (USA-WA-Redmond) Service Engineer 2   
    “Cybersecurity is like going to the gym, you can’t get better by watching others, you’ve got to go there every day.” ~Satya Nadella, CEO, Microsoft At Microsoft we believe in the strength of our operational security posture to protect our customer data, supply chain, devices and our intellectual property. If that sounds like an important job to you – you’re right – and that’s why we need your help! We have an exciting opportunity for a Service Engineer who is passionate about security, reliability and design. We are looking for a Service Engineer who is ready to be a part of a team that moves fast, leverages continuous delivery practices, and is centered at the customer experience. RESPONSIBILITIES: Set-up •New server set-up •Server upgrades (release management) •Capacity planning •Data ingestion Maintain •Asset tracking and configuration management •Patching •System health telemetry and monitoring •System troubleshooting (incident management) •Change management Improve •Automation and optimization •Problem management QUALIFICATIONS: •4+ years experience in Linux and Windows basic administration •Demonstrated expertise in web services, virtualization and cloud concepts •Site Reliability Engineering (SRE) •Outstanding problem-solving skills and passion to solve hard problems as part of a team •Experience in automation, specifically related to deployment, recovery, or other manual processes Experience in many of the following: •ETL (Extract, Transform, Load) process •Solr •MySQL / MS SQL Server •SQL, including queries •Syslog •C#, PowerShell PREFERRED, NOT REQUIRED: •User and Entity Behavior Analytic (UEBA) AND/OR Machine Learning experience •SIEM Familiarity •Knowledge of security software & network/network security components •At least one industry relevant information security certificate (GCIA, GCIH, CISSP, OSCP) ISRM ISRMJOBS Microsoft is an equal opportunity employer. 
All qualified applicants will receive consideration for employment without regard to age, ancestry, color, family or medical care leave, gender identity or expression, genetic information, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran status, race, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable laws, regulations and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application or the recruiting process, please send a request to askstaff@microsoft.com. Information technology (IT) & operations
              (USA-WA-Redmond) Principal Program Manager   
    Data powers an increasing range of applications, transforming not just the technology industry but society at large. To that end, cloud computing is fundamentally transforming businesses as they strive to derive value from their data. Responsible for all relational database products and services at Microsoft, the Database Systems (DS) organization is at the heart of this transformation: We drive the business for the most widely used relational database in the world - SQL Server - along with the industry’s leading cloud services such as Azure SQL Database and Azure SQL DW and are accountable for a significant portion of Microsoft’s overall revenue and our cloud investments reflect a rapid pace of innovation to support the next generation of computing. SQL Server on VM is core to cloud transformation. Not only does SQL Server on VM enable customers to seamlessly migrate existing enterprise workloads to Azure, but the investments we are making in building out our Hosting business allow customers to host their SQL Server workloads in multiple clouds, like Amazon, Google and Rackspace, providing customers the flexibility to run their SQL Server in the cloud of their choice. We are looking for an experienced, highly motivated program manager to lead our SQL VM and Hosting efforts end to end, driving all aspects of the business ranging from product strategy, requirements gathering and feature specification to business development! In truth, an exciting role spanning both technology and business, offering a wealth of opportunity for growth to the right individual. The core accountabilities include: • Defining the product and hosting strategy. • Driving core product requirements. • Managing engineering stakeholder relationships across Azure. • Leading growth hacking activates and collaborating with marketing on GTM and field initiatives. • Building and nurturing strategic relationships with third party cloud Hosters. 
• The ideal candidate will be a self-driven individual with strong communication skills. • You have a proven track record of driving a business end to end covering all efforts across product strategy, execution and business growth. • SQL Server is a must and you are comfortable engaging with customers and partners at a deep technical level providing both operational and technical guidance on how to configure, run and optimize SQL Server workloads. • Your deep technical expertise is complimented by strong business acumen, required to actively help drive business forward and represent the product during regular business rhythms. Basic Qualifications: • B.S. degree in Computer Science or IT related discipline. • 5+ years’ experience in product design or product management • 5+ years of customer facing product management, or similar experience • 1+ years’ experience to configure, run and optimize SQL Server workloads Preferred Qualifications: • Intense curiosity and willingness to question. • Love the next problem, the next experiment, the next partner. • Have a deep desire to work collaboratively, solve problems with groups, find win/win solutions and celebrate successes. • Get excited by the challenge of hard technical problems. • Solve problems by always leading with deep passion and empathy for customers. Experience developing big data analytic application, streaming solutions, or machine learning applications, using technologies like Hadoop, Spark, Big Query, Storm, R, Red Shift, S3, EBS, or GCS. Experience building cloud scale applications on cloud platforms like Azure, AWS or GCE. Proven experience in building and driving execution on large complex projects Microsoft is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, gender, sexual orientation, gender identity or expression. 
Microsoft Cloud Background Check: This position will be required to pass the Microsoft Cloud background check upon hire/transfer and every two years thereafter. Ability to meet Microsoft, customer and/or government security screening requirements are required for this role. All qualified applicants will receive consideration for employment without regard to age, ancestry, color, family or medical care leave, gender identity or expression, genetic information, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran status, race, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable laws, regulations and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application or the recruiting process, please send a request to askstaff@microsoft.com. Development (engineering)
              (USA-WA-Redmond) Machine Learning Scientist   
    In the Learning Systems Group of the Cloud and Information Services Lab (CISL), we research, design and develop state-of-the-art machine learning algorithms, tools and systems. Our mission is to contribute to the democratization of machine learning through research. To do so, we collaborate with Microsoft Research and the academic community. We actively engage in open source software development. Our work directly informs and shapes products such as Azure ML, Azure Data Lake and Microsoft R Server. You will be a part of an enthusiastic and experienced team that takes pride in creating state-of-the-art ML technology. We are looking for people with a combination of: •Fundamental ML knowledge: You either have authored papers in relevant top-tier venues (like ICML, NIPS, KDD, CVPR, ACL, etc.) and/or have read them for pleasure. •Excellent development and implementation skills: You enjoy creating efficient, well-designed implementations of learning algorithms, finding their non-trivial applications, and conducting thorough evaluations. •Drive to democratize ML: You are both amazed by and proud of the machine learning community and its accomplishments and at the same time disappointed by its localized impact thus far. •You want to help make machine learning useful beyond the web companies across a wide array of application domains. Responsibilities include: •Identifying and solving hard, yet tractable problems to overcome to democratize machine learning. •Contribute to the state-of-the-art in machine learning and share those advances through publications in top venues. •Attract and mentor research interns. •Help design and deliver both general and domain-specific machine learning algorithms and systems. •Drive sound design and implementation through hands-on development. •Work with partner teams on the integration of machine learning technology into their products. Basic Qualifications: •B.S. degree in Computer Science or IT related discipline. 
•2+ years of machine learning experience •2+ years’ experience in design and problem solving •2+ years of Experience in one or more of the commonly used parallel/distributed systems/technologies (Apache Hadoop, Apache Spark, MPI, CUDA, Hive, Java, C++, C#). •1+ Publications in major conferences and/or journals Preferred Qualifications: •Intense curiosity and willingness to question •Love the next problem, the next experiment, the next partner •Have a deep desire to work collaboratively, solve problems with groups, find win/win solutions and celebrate successes •Get excited by the challenge of hard technical problems •Solve problems by always leading with deep passion and empathy for customers •Strong bias for architecting for performance, scalability, usability, security, and reliability •Good communicator with the ability to analyze and clearly articulate complex issues and technologies understandably and engagingly Ability to meet Microsoft, customer and/or government security screening requirements are required for this role. These requirements include, but are not limited to the following specialized security screenings: Microsoft Cloud Background Check: This position will be required to pass the Microsoft Cloud background check upon hire/transfer and every two years thereafter. Microsoft is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to age, ancestry, color, family or medical care leave, gender identity or expression, genetic information, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran status, race, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable laws, regulations and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application or the recruiting process, please send a request to askstaff@microsoft.com. 
Development (engineering)
              (USA-WA-Redmond) Senior Software Engineer   
    Do you have an insatiable appetite to learn about the latest security vulnerabilities? Are you a great programmer? If so, we want to talk to YOU! As a software engineer you will work very closely with the cloud and enterprise data group to build software to discover security vulnerabilities in Microsoft products. We are looking for candidates who share our passion to make our software very secure. You will work with the best engineers using the latest Azure services, machine learning and other cutting-edge technologies. You will have an opportunity to work in a dynamic environment that challenges you to think outside the box and presents a diverse set of engineering problems to solve. You will be challenged, but it will be worth it. Are you READY? Must be self-motivated - Great communication and collaboration skills - Customer focused, and familiar with agile development methodologies. If you have any questions or need additional information, please let me know. Thank you for your interest in our opportunity! I look forward to talking with you soon again. Basic qualifications: B.S. degree in Computer Science or related IT. 4+ years’ experience in development, problem solving, communication, and collaboration skills 4+ years’ experience in C# OR .Net OR Java OR C++ Preferred Qualifications: Intense curiosity and willingness to question Love the next problem, the next experiment, the next partner Have a deep desire to work collaboratively, solve problems with groups, find win/win solutions and celebrate successes Get excited by the challenge of hard technical problems Solve problems by always leading with deep passion and empathy for customers Proficiency in C#, SQL Server, and network protocols is preferred with expertise in troubleshooting and debugging skills. Multithreaded and parallel programming experience. Microsoft is an equal opportunity employer. 
All qualified applicants will receive consideration for employment without regard to age, ancestry, color, family or medical care leave, gender identity or expression, genetic information, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran status, ethnicity, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable laws, regulations and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application or the recruiting process, please send a request to askstaff@microsoft.com. Development (engineering)
              Write a research paper from scratch till publication by zwebmaster   
    Given a research paper on Machine Learning, Specifically DEEP LEARNING. (you can also select a research paper yourself based on your own research): You have to do academic research and suggest an honest genuine improvement to this research paper and then write a new paper... (Budget: $250 - $750 USD, Jobs: Artificial Intelligence, Machine Learning, Research)
              Prediction.IO — MVC for machine learning   

    Some time ago, when I needed to do a simple proof-of-concept spam recognition task and was looking for a simple framework in Scala or Python, I came across prediction.io by chance. A quick glance at the page made me interested, and I decided it was worth trying. My impression of this tool was really great. Simple, easy to Read more about Prediction.IO — MVC for machine learning[…]

    The post Prediction.IO — MVC for machine learning appeared first on DataCentric.


              Comment on Xometry by Xometry Receives $15MM Investment from BMW, GE, and Highland Capital - 3D Printing Media Network   
    […] Xometry, the leading on-demand manufacturing platform, received $15MM in funding led by BMW i Ventures, with participation from existing investors including GE Ventures and Highland Capital Partners. Further fueling Xometry’s rapid market expansion, the latest round of funding will accelerate Xometry’s investment in its machine learning-based software platform, manufacturing partner network, and sales organization. […]
              Senior Software Engineer   
    Senior Software Engineer - Telecommunications

    What you will do:

    − You will co-develop on the Advanon platform or related applications, using your own strengths

    − You will present and evaluate new technologies and architectures

    −  You will contribute within all development life-cycles

    − You will work together in small, highly motivated, interdisciplinary teams

    − You will face unforeseen situations where you are always welcome to bring in your inputs on finding solutions. So you will be highly challenged!

     

    What we expect:
    − You have a degree in Computer Science, Web/Software Engineering, or related field

    − You are experienced in developing web applications (2 years min., 3-5 preferred)

    − You have a valid work permit for the EU / Switzerland

    − You have a combination of the following skills:

    Must have Ruby /  Rails OR React, Redux, (optionally React Native & Node)
    Relational Databases, PostgreSQL and ORM technologies
    Knowledge of micro-service and REST API architectures
    Interest in Machine Learning a plus

     

    What we offer:
    − Competitive salary + share option possibilities

    − Spend time with an innovative and passionate group of professionals building an exceptional product

    − ADVAPERKS, an awesome benefits package:

    Unlimited holiday, 2 weeks per year flight/hotel for holiday is paid, annual transport passes, brekkie/TGIF events, budget for free-time courses, 3-6 months parental leave, remote work time, and even a laundry service option!

     

    Do you fit the profile we are seeking? Do you love spending time with a motivated and passionate team? Check out our website and apply here: https://advanon.typeform.com/to/tCxGyo

     

    We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.

      Job type:
      Full-time

              Tuputech Building Machine Learning Into User-generated Content Moderation   


              Optimizing hospital-wide patient scheduling : early classification of diagnosis-related groups through machine learning / Daniel Gartner   
    RA971.8 .G37 2014
              Horizon machine learning demo   

    MingRuiWang: Horizon machine learning demo. Last run an hour ago · Python notebook · 17 views · using data from MNIST data · Public. [Link to Full Article]

              131: Strategy development—powered by machine learning w   

    You’ll recall, I had Andy Kershner on the podcast a few episodes back. [Link to Full Article]

              Salesforce Extends Einstein Machine Learning Features for Developers   

    Applications that use machine learning can acquire knowledge based on new data without being programmed. [Link to Full Article]

              Latest Google Photos Update Brings Machine Learning Sharing Features   

    [Link to Full Article]

              AMD launches the world's fastest graphics card for machine learning development and advanced …   

    [Link to Full Article]

              Google Photos can now use machine learning to share your pics   

    [Link to Full Article]

              Researchers Think They Can Use Twitter to Spot Riots Before Police   

    Researchers in the UK used machine learning algorithms to analyze 1. [Link to Full Article]

              AusPost trials machine learning to manage unpaid bills   

    Australia Post has quietly created an email add-on tool that uses machine learning to find unpaid bills and itemise when they need to be paid. [Link to Full Article]

              As Machine Learning and AI Perform Magic, How can UX Professionals He…   

    #UXPA2017 (www.uxpa2017.org): As Machine Learning and AI Perform Magic, How… Agenda 1. [Link to Full Article]

              Machine Learning's Mediocre Gains   

    Hedge funds using vast amounts of data, computing power, and machine learning techniques to make money are drawing investors’ attention. [Link to Full Article]

              How Artificial Intelligence Is Taking on Ransomware   

    For that, security researchers turn to machine learning, a form of artificial intelligence. [Link to Full Article]

              Machine learning will be the central theme of Expoelearning 2017   
    Machine learning is arriving in education. Mass personalization means learning from the data and automating learning processes, even better than a human would. Sounds futuristic, right? Well, that is what Expoelearning proposes this year, whose central theme will revolve around [...]
              Will A.I. and machine learning make everyone a musician?   
    Music and other live performance art has always been at the cutting edge of technology so it’s no surprise that artificial intelligence and machine learning are pushing its boundaries. As AI’s ability to manage key elements of the creative process continue to evolve, should artists be worried about the machines taking over? Probably not, says […]
              Machine Learning Consultant (Remote 50% travel)   

              Jobs at Amazon   
    I do not normally post job adverts, but this was very specifically targeted to “applied time series candidates” so I thought it might be of sufficient interest to readers of this blog. Here is an excerpt from an email I received from someone at Amazon: Amazon is aggressively recruiting in the data sciences, and we have found that applied economists compare quite favorably with the machine learning specialists and statisticians that are sometimes recruited for such roles.
              Congratulations to Dr Souhaib Ben Taieb   
    Souhaib Ben Taieb has been awarded his doctorate at the Université libre de Bruxelles and so he is now officially Dr Ben Taieb! Although Souhaib lives in Brussels, and was a student at the Université libre de Bruxelles, I co-supervised his doctorate (along with Professor Gianluca Bontempi). Souhaib is the 19th PhD student of mine to graduate. His thesis was on “Machine learning strategies for multi-step-ahead time series forecasting” and is now available online.
              Free books on statistical learning   
    Hastie, Tibshirani and Friedman’s Elements of Statistical Learning first appeared in 2001 and is already a classic. It is my go-to book when I need a quick refresher on a machine learning algorithm. I like it because it is written using the language and perspective of statistics, and provides a very useful entry point into the literature of machine learning which has its own terminology for statistical concepts. A free downloadable pdf version is available on the website.
              Looking for a new post-doc   
    We are looking for a new post-doctoral research fellow to work on the project “Macroeconomic Forecasting in a Big Data World”. Details are given at the link below jobs.monash.edu.au/jobDetails.asp?sJobIDs=519824 This is a two year position, funded by the Australian Research Council, and working with me, George Athanasopoulos, Farshid Vahid and Anastasios Panagiotelis. We are looking for someone with a PhD in econometrics, statistics or machine learning, who is well-trained in computationally intensive methods, and who has a background in at least one of time series analysis, macroeconomic modelling, or Bayesian econometrics.
              Boosting multi-step autoregressive forecasts   
    Multi-step forecasts can be produced recursively by iterating a one-step model, or directly using a specific model for each horizon. Choosing between these two strategies is not an easy task since it involves a trade-off between bias and estimation variance over the forecast horizon. Using a nonlinear machine learning model makes the tradeoff even more difficult. To address this issue, we propose a new forecasting strategy which boosts traditional recursive linear forecasts with a direct strategy using a boosting autoregression procedure at each horizon.
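For readers unfamiliar with the two strategies being contrasted above, here is a minimal Python sketch of recursive versus direct multi-step forecasting. It uses plain AR(1) models fitted by ordinary least squares on an invented series; it does not reproduce the boosting procedure the paper proposes.

```python
def ols(x, y):
    """Ordinary least squares fit of y = a + b*x; returns (a, b)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
        / sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b

def recursive_forecast(series, horizon):
    """Fit one one-step AR(1) model and iterate it forward."""
    a, b = ols(series[:-1], series[1:])
    preds, last = [], series[-1]
    for _ in range(horizon):
        last = a + b * last
        preds.append(last)
    return preds

def direct_forecast(series, horizon):
    """Fit a separate model per horizon: y[t+h] regressed on y[t]."""
    preds = []
    for h in range(1, horizon + 1):
        a, b = ols(series[:-h], series[h:])
        preds.append(a + b * series[-1])
    return preds

series = [1.0, 1.5, 2.2, 3.1, 4.0, 5.2, 6.1, 7.3]  # invented trending series
print(recursive_forecast(series, 3))
print(direct_forecast(series, 3))
```

At horizon 1 the two strategies coincide; they diverge from horizon 2 onward, which is exactly where the bias/variance trade-off described above appears.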
              OTexts.org is launched   
    The publishing platform I set up for my forecasting book has now been extended to cover more books and greater functionality. Check it out at www.otexts.org. So far, we have three complete books: Forecasting: principles and practice, by Rob J Hyndman and George Athanasopoulos; Statistical foundations of machine learning, by Gianluca Bontempi and Souhaib Ben Taieb; and Modal logic of strict necessity and possibility, by Evgeni Latinov; and one book currently being written:
              Out-of-sample one-step forecasts   
    It is common to fit a model using training data, and then to evaluate its performance on a test data set. When the data are time series, it is useful to compute one-step forecasts on the test data. For some reason, this is much more commonly done by people trained in machine learning rather than statistics. If you are using the forecast package in R, it is easily done with ETS and ARIMA models.
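The post refers to R’s forecast package; as a language-neutral sketch of the same idea (an assumed AR(1) model on an invented series, purely for illustration), fit on the training data only, then produce one-step forecasts over the test set, each conditioned on the previous observed value:

```python
def ols(x, y):
    """Ordinary least squares fit of y = a + b*x; returns (a, b)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
        / sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b

def one_step_forecasts(train, test):
    """Fit AR(1) on the training data, then predict each test point
    from the previous *observed* value -- no refitting, no iteration."""
    a, b = ols(train[:-1], train[1:])
    history = [train[-1]] + list(test)
    return [a + b * prev for prev in history[:-1]]

train = [2.0, 2.4, 3.1, 3.9, 4.6, 5.5]  # invented training series
test = [6.2, 7.1, 7.9]                  # invented held-out observations
preds = one_step_forecasts(train, test)
mae = sum(abs(p - t) for p, t in zip(preds, test)) / len(test)
print(preds, mae)
```

Because each forecast uses the latest actual observation, the one-step errors reflect the model itself rather than accumulated forecast error over the horizon.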
              Learn Machine Learning at Stanford for free   
    Andrew Ng’s machine learning course at Stanford is being offered free to anyone online in the (northern) fall of 2011. I’ve seen some of the notes from this course and it looks to be an excellent broad introduction to machine learning and data mining. For example, support vector machines, neural networks, kernels, clustering, dimension reduction, etc. Statisticians should know something about this area (just as computer scientists working in machine learning should know some statistical modelling), and this would be a great way to learn it.
              Update on a StackExchange site for statistical analysis   
    About six weeks ago, I proposed that there should be a Stack Exchange site for questions on data analysis, statistics, data mining, machine learning, etc. I can finally report that there has been substantial progress on this. The formal proposal is now at Area 51 where the scope of the new site is being developed and voted on in a democratic way. The site has been in a private beta state for a week or so, but is now open for anyone to join in.
              A StackExchange site for statistical analysis?   
    Regular readers of this site will know I’m a fan of using Stack Overflow for questions about LaTeX, R and other areas of programming. Now the people who produce Stack Overflow are planning on setting up several new sites for asking questions about other topics, and are seeking proposals. I have proposed that there should be a site for questions on data analysis, statistics, data mining, machine learning, etc.
              New Artificial Intelligence Hub At CMU Aims To Make Pittsburgh A World Leader In AI   
    Faculty and staff from several schools at Carnegie Mellon University are joining forces in an effort to accelerate the science of Artificial Intelligence. University leaders said they hope that by pulling together more than 100 faculty through the creation of CMU AI, it will maintain the university’s role as a leader in the field. CMU School of Computer Science dean Andrew Moore said the “confederation” of faculty and students from various disciplines will allow the school to offer what he calls “full stack” education and research. “That means [the students] need to be able to hang out and work on projects in labs not just with the technology experts on specific parts of AI, like machine learning or computer vision, but they have seen examples of putting everything together,” Moore said. Moore said the university has been able to build great AI systems that combine technologies from several different disciplines. However, they have been dependent on individual faculty members
              Senior-Inventive Scientist (Labs- Big Data Research) - AT&T - Bedminster, NJ   
    Our rich customer and network data allows analysts to positively affect business outcomes and to pursue methodological research (comparing machine learning...
    From AT&T - Sun, 25 Jun 2017 06:36:56 GMT - View all Bedminster, NJ jobs
              Developer with ML background for Machine Learning development - SAP - Palo Alto, CA   
    Experience in statistical modeling, machine learning, or data mining practice. Familiar with one or more machine learning or statistical modeling tools....
    From SAP - Sat, 20 May 2017 05:19:08 GMT - View all Palo Alto, CA jobs
              Senior Developer - SAP - Palo Alto, CA   
    Knowledge in Business Intelligence, Machine Learning. You will have the chance to work with real-world business problems....
    From SAP - Fri, 14 Apr 2017 22:14:10 GMT - View all Palo Alto, CA jobs
              Deep Learning Expert for Machine Learning Development - SAP - Palo Alto, CA   
    Experience in statistical modeling, machine learning, or data mining practice. Familiar with one or more machine learning modeling tools and platform....
    From SAP - Fri, 17 Mar 2017 00:36:25 GMT - View all Palo Alto, CA jobs
              Senior-Inventive Scientist (Labs- Big Data Research) - AT&T - New York, NY   
    Our rich customer and network data allows analysts to positively affect business outcomes and to pursue methodological research (comparing machine learning...
    From AT&T - Sun, 25 Jun 2017 06:36:55 GMT - View all New York, NY jobs
              Data Scientist - Dstillery - New York, NY   
    Plus strong professional experience in Machine Learning, Statistics, Operations Research or a related field....
    From Dstillery - Wed, 07 Jun 2017 14:25:01 GMT - View all New York, NY jobs
              Data Analyst - POLICE DEPARTMENT - New York, NY   
    Ability to quickly learn new software, including Tableau, IBM Cognos, and internal. Extensive knowledge of applied statistics, analytics, machine learning, data... $70,286 - $88,213 a year
    From NYC Careers - Fri, 03 Mar 2017 11:28:27 GMT - View all New York, NY jobs
              Network Engineer - Daimler - Sunnyvale, CA   
    MBRDNA is headquartered in Silicon Valley, California, with key areas of Advanced Interaction Design, Digital User Experience, Machine Learning, Autonomous...
    From Daimler - Thu, 13 Apr 2017 05:42:50 GMT - View all Sunnyvale, CA jobs
              Senior Software Engineer - Amazon Corporate LLC - New York, NY   
    Machine learning experience. What's the business opportunity? We also own internal services for launching, managing, and monitoring of those placements....
    From Amazon.com - Sat, 11 Mar 2017 00:47:45 GMT - View all New York, NY jobs
              Software Dev Engineer -- Ad Platform - Amazon Corporate LLC - New York, NY   
    Machine learning experience. What's the business opportunity? We also own internal services for launching, managing, and monitoring of those placements....
    From Amazon.com - Wed, 08 Mar 2017 06:39:18 GMT - View all New York, NY jobs
              Business Continuity / Disaster Recovery Architect - Neiman Marcus - Dallas, TX   
    Advanced degree in Applied Mathematics, Business Analytics, Statistics, Machine Learning, Computer Science or related fields is a plus....
    From Neiman Marcus - Thu, 25 May 2017 22:30:52 GMT - View all Dallas, TX jobs
              Vice President, Chief Architect & Fellow, CTG - Intuit - San Diego, CA   
    And machine learning. A laser focus on outstanding business outcomes. Practices, and evangelize strategic use of data throughout the business....
    From Intuit - Mon, 19 Jun 2017 11:09:03 GMT - View all San Diego, CA jobs
              Machine Learning Scientist - RSVP Technologies Inc. - Ontario   
    Minimum 3 years of programming experience in Java, C/C++, Python or other languages. Excellent literature survey, reading and writing skills in English....
    From RSVP Technologies Inc. - Thu, 15 Jun 2017 10:38:49 GMT - View all Ontario jobs
              Research Software Developer - Sunnybrook Health Sciences Centre - Toronto, ON   
    The successful applicant will join the Martel research group which is focused on the development of image analysis and machine learning techniques applied to...
    From Sunnybrook Health Sciences Centre - Tue, 27 Jun 2017 16:56:31 GMT - View all Toronto, ON jobs
              Microsoft to sell Box storage to Azure customers   
    Microsoft has announced a new tie-up with Box that will extend the intelligence and reach of its Azure cloud platform. Under the terms of the deal, Box will now use Azure as a strategic cloud platform, with a new "Box on Azure" now being offered out to enterprise customers around the world. However the partnership will also see Box getting the chance to use Azure’s artificial intelligence and machine learning capabilities for the first time. This could potentially soon mean that Box customers would be able to use highly advanced tools such as advanced content processing, and voice control, to power… [Continue Reading]

              Machine Learning Comes to Tour De France   
    LONDON & PARIS--(BUSINESS WIRE)--#TDF2017--Machine learning technologies at this year’s Tour de France will give cycling fans across the globe an unprecedented experience of this year’s event.
              Bioinformatics Specialist-Metagenomics/Proteomics - Signature Science, LLC - Austin, TX   
    Travel to project and business development meetings as needed. Familiarity with machine learning, Git, and agile software development is a plus;... $90,000 a year
    From Signature Science, LLC - Tue, 06 Jun 2017 09:05:50 GMT - View all Austin, TX jobs
              Becky’s Affiliated: The inspiring story behind Onfido’s machine learning innovation   

    The post Becky’s Affiliated: The inspiring story behind Onfido’s machine learning innovation appeared first on CalvinAyre.com.


              The Android Things Developer Preview 2 is out, adds support for Intel's Joule, brings TensorFlow for machine learning on IoT platforms, and more   

    It's been a big day from the mystical Google land. In addition to all of the Wear stuff, the team behind Android Things has released the second Developer Preview for supported Internet-of-Things platforms. It brings some new features and a few bug fixes, as well as support for the Intel Joule.

    Android Things, formerly known as Brillo, is Google's solution to a comprehensive, friendly IoT foundation upon which to build awesome products.

    Read More

    The Android Things Developer Preview 2 is out, adds support for Intel's Joule, brings TensorFlow for machine learning on IoT platforms, and more was written by the awesome team at Android Police.


              Large-Scale MOO Experiments with SHARK – Oracle Grid Engine   
    This post explains how to conduct large-scale MOO experiments with the SHARK machine learning library on clusters running Oracle Grid Engine. An experiment consists of three phases: front approximation, performance indicator calculation, and result accumulation and statistics calculation. Within this post, I’m going to focus on the first step.

    Front Approximation

    In this phase, the Pareto […]
              Shark 3.x – Continuous Integration   
    Taken from the SHARK website: SHARK is a modular C++ library for the design and optimization of adaptive systems. It provides methods for linear and nonlinear optimization, in particular evolutionary and gradient-based algorithms, kernel-based learning algorithms and neural networks, and various other machine learning techniques. SHARK serves as a toolbox to support real world applications […]
              NEJM This Week - June 29, 2017   
    Featuring articles on air pollution and mortality in the Medicare population, electrical direct current vs. escitalopram for depression, levothyroxine in older adults with subclinical hypothyroidism, and antibiotics for skin abscess; review articles on subclinical hypothyroidism and on the irritable bowel syndrome; a case report of a man with weight loss, confusion, and skin lesions; and Perspective articles on prospects for health care reform in the U.S. Senate, on FDA approval of valbenazine for tardive dyskinesia, on machine learning and prediction in medicine, and on decision aids and elective joint replacement.
              Chief of Staff - Castle Global - San Francisco, CA   
    On the enterprise side, our first product is commercializing our machine learning platform called Hive, which focuses on business problems that can be solved by...
    From Castle Global - Sat, 06 May 2017 09:55:44 GMT - View all San Francisco, CA jobs
              Price Drop: X Drummer   
    X Drummer
    Category: Music
    Price: €21.99 -> €16.99
    Version: 1.0.2
    open in iTunes

    Description:
    LIKE PLAYING WITH A SESSION DRUMMER
    X Drummer will be your best songwriting partner. With a few finger taps, it quickly learns your song. You can even change the feel of each drummer depending on your personal taste. AI Drums will intelligently search through hundreds of drum patterns and drum kits and virtually match your songs. The more you play with AI Drums, the better it gets at learning your preferences.

    THE MOST ADVANCED AI TECHNOLOGY
    X Drummer lets you play, write, and rehearse with a virtual drummer, powered by the latest AI technology and deep machine learning. It works like magic; AI Drums listens to your guitar pattern and sorts out a matched drum pattern that best suits your compositions. Just tap to record and the app will learn and play the right drum pattern.

    CREATE YOUR DRUM TRACK IN A MINUTE
    Our super intuitive browser and drum track editor lets you easily search and preview drum grooves matched with your music. With a simple tap of your finger you can easily arrange and edit your drum track in seconds.

    DESIGN YOUR DRUM KIT
    Change any and every drum component. X Drummer lets you edit the basic parts of the kit, like the drum head and damping ring. You can go into deeper drum customization by changing the individual instrument options such as pitch, attack and resonance. In addition to designing your drum sound, you can also design the look and feel of your drum kit to make it as unique as your sound.

    THOUSANDS OF DRUM KITS AND PATTERNS ON THE CLOUD (...
              Millennials are ditching chains like McDonald's for different kinds of fast food   

    A meal consisting of a Quarter Pounder hamburger, french fries and soft-drink is pictured at a McDonald's restaurant in Los Angeles, California July 23, 2008. REUTERS/Fred Prouser

    The fast food business is no longer dominated by burgers-and-fries or fried chicken.

    According to a poll from our partner, MSN, Americans are more interested in pizza and Mexican food than traditional burger and chicken joints: 37% to 32%, respectively.

    MSN polls its readers, and then uses machine learning to model how a representative sample of the US would have responded, using big data, such as the Census. It's nearly as accurate as a traditional, scientific survey.
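The article does not say which method MSN uses; one standard technique for modeling how a representative sample would have responded is post-stratification, reweighting each respondent group so its weight matches known population shares (e.g. from the Census). A toy Python sketch, with all numbers invented:

```python
# Toy post-stratification: the poll sample over-represents younger readers,
# so reweight each age group's answer rate by census population shares.
poll = {            # (invented) share answering "pizza" per age group
    "18-34": 0.50,
    "35-54": 0.35,
    "55+":   0.20,
}
sample_share = {"18-34": 0.60, "35-54": 0.30, "55+": 0.10}  # skewed poll sample
census_share = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}  # population shares

raw      = sum(poll[g] * sample_share[g] for g in poll)  # naive poll estimate
weighted = sum(poll[g] * census_share[g] for g in poll)  # post-stratified
print(round(raw, 4), round(weighted, 4))  # -> 0.425 0.3425
```

Reweighting pulls the estimate toward the groups the raw sample under-counts, which is why the weighted figure here is lower than the naive one.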

    The poll also found roughly twice as many people say their favorite burger place is Wendy's versus Burger King or McDonald's, and it's even more pronounced among younger Americans. Roughly half the country admits to eating fast food at least once a week. 

    While McDonald's and other burger chains are synonymous with fast-food for many customers, the rise of pizza and Mexican chains shouldn't come as a shock. 

    Pizza chains have been on the rise in recent years. Smaller fast-casual chains such as Blaze Pizza and PizzaRev have had explosive growth, establishing a new kind of pizza chain.

    However, perhaps the biggest winner in the pizza industry in 2017 is Domino's. In the last nine years, the chain has nearly doubled its sales, reaching $10.9 billion in 2016, compared to $5.5 billion in 2008. 

    Domino's, along with other big-name pizza chains Papa John's and Pizza Hut, have led the industry in two of the hottest restaurant industry trends in 2017: digital ordering and delivery. While other fast-food chains like McDonald's and Wendy's are still fine-tuning apps and rolling out delivery tests, at least half of all orders at all of the "big three" pizza chains are through digital channels. 

    Then, there's Mexican fast-food. 

    Similar to pizza, Mexican chains in the US are thriving thanks to both a plethora of smaller fast-casual chains and the success of a few big names. Chipotle — which first introduced many Americans to the Mission-style burrito — is once again on the up-and-up, and there are also a number of newcomers following in the fast-casual chains' footsteps. 

    However, the power of Taco Bell cannot be ignored. While few would argue that the chain attempts authenticity, Taco Bell's twisted creations have been on fire in recent years.

    In May, parent company Yum Brands reported that Taco Bell's same-store sales grew 8% in the first quarter, with sales bolstered by the success of the chain's Naked Chicken Chalupa. Traffic grew by 5%, despite industry concerns of a looming restaurant recession. 

    Traditional burger chains and fried chicken joints are unlikely to die out any time soon. Fried chicken especially has been on a hot streak, with the growth of chains like Chick-fil-A, and non-chicken brands like Taco Bell and Burger King adding more fried chicken to the menu.

    However, with millennials' appreciation for fast food that goes beyond burgers and fries, it's clear that McDonald's may not be the king of fast-food forever. 


              This ex-tech CEO raised $10 million for a trendy grilled cheese shop — but it hasn't worked out like he planned   

    The Melt

    In the spring of 2011, more than 500 tech luminaries, kingmakers, entrepreneurs, and journalists convened in southern California for the D: All Things Digital Conference. The sold-out event promised “digital disruption out the wazoo,” and a crowd had shelled out $4,795 a head for a lineup of heavyweights the likes of Eric Schmidt, Reed Hastings, and Marc Andreessen.

    Waiting in the wings was a smaller, but still recognizable name: Jonathan Kaplan, one of Silicon Valley’s prodigal sons with a moonshot of a second act. The founder of Pure Digital Technologies, a maker of camera and video recorders, Kaplan had first walked onto the All Things D stage six years earlier to debut his Flip video camera. The Flip quickly became a consumer favorite; four years post-debut, Kaplan sold his company to Cisco for $590 million. But true to the entrepreneur trajectory, Kaplan found the stability of a large company stultifying. He wanted to change the world one more time.

    His new project, Kaplan teased, was “founded on the exact same fundamentals” as the Flip. After a few warm-up questions from his interviewers, tech journalists Kara Swisher and Walt Mossberg, Kaplan revealed the cutting-edge creation he was poised to unleash: grilled cheese sandwiches. Five different kinds of them, in fact. Featuring not only cheddar, but also fontina, gruyere, and jalapeño jack.

    The new company was called The Melt. Its motto seemed cribbed from the clumsy English slogans sometimes featured on Asian T-shirts: “Grilled Cheese Happiness.” The sandwiches formed a minimalist menu, accompanied only by soup. “It turns out when you put soup and grilled cheese together, it’s really wonderful,” Kaplan informed his audience, as if divulging a trade secret.

    Forget Mars colonies and AI. Kaplan declared he had “developed a set of technology that allows us to make the perfect grilled cheese.” The innovation was as meaningful as it was miraculous: They had “that nostalgic thing,” Kaplan explained. Grilled cheese sandwiches were the fast food equivalent of Proust’s madeleines, priming them for disruption.

    Swisher and Mossberg openly smirked. “I feel like this is post-traumatic stress from Cisco,” said Swisher. “I think he went home and looked at his money,” Mossberg deadpanned.

    Kaplan was unfazed. Armed with a tech founder’s unflappable confidence and ambitious growth targets, he announced plans to open 500 fast-casual outlets within five years — all of them company-owned, not franchised. Never mind that it had taken Chipotle three times as long to hit that milestone using the same model. Kaplan felt confident in his melted-cheese rocket ship.

    The Melt boasted an elite group of investors — including Sequoia Capital, better known for its bets on Instagram and YouTube — and enough cash to launch twenty restaurants, at a cost of $500,000 to $1 million apiece. Kaplan had recruited some of the Bay Area’s top names, including Michelin-starred chef Michael Mina and former Apple executive Ron Johnson, the genius behind the tech giant’s retail stores.

    With the home appliance company Electrolux, he’d created a device that delivered a restaurant-quality sandwich in 45 seconds flat—a “huge breakthrough” in sandwich technology. (“Sandwich presses have been around forever,” protested a skeptical Mossberg. “Not a sandwich press!” Kaplan retorted. “This is two induction burners! Microwaves! Silpats!”)

    Next month marks the six-year anniversary of The Melt’s onstage debut. Far from 500 stores, it now runs a grand total of 18 outlets. In the years since it first opened shop, The Melt has grown in fits and starts — launching, then dismantling, a fleet of food trucks, for example. Last September, Kaplan was replaced as CEO by Ralph Bower, a restaurant industry executive with more than 25 years of experience at companies like Domino’s Pizza and KFC.

    Falling short of its 500 restaurant goal hardly qualifies The Melt as a flop. Shake Shack, the burger chain founded by longtime restaurateur Danny Meyer’s Union Square Hospitality Group, took 13 years to reach one hundred outlets and is now worth over $1 billion. But former employees at The Melt, ranging from the top echelons of the company to in-store crew members, tell a complicated story of a company that had to roll out sweeping changes to its initial model after overestimating the competitive advantage of its technology — which proved to be both a source of strength and, at times, a liability.

    The Melt’s blundering trajectory is instructive, as Silicon Valley wunderkinds seek to infuse everyday objects with help from algorithms and apps. Entrepreneurs frequently embark on these missions with vast sums of money and a deep belief in technology’s power to solve all problems — which is not always a formula for success in the brick-and-mortar business of ordinary life: delivering groceries, selling luggage, or making sandwiches.

    “Don’t let the fact that it’s just grilled cheese fool you,” said a former senior leader at The Melt, speaking on condition of anonymity. Making a grilled cheese “in 45 seconds, and doing it perfectly, and doing it profitably: That ain’t easy. It’s harder than even we thought it was going to be — and certainly harder than a lot of smart money thought it was going to be.”

    When The Melt opened its first restaurant in the summer of 2011, its offerings were comfort food to the core. A minimalist menu of five soup-and-sandwich combinations included “The Classic” (cheddar on potato bread with tomato soup) and, on the more exotic end, “The Wild Thing” (gruyere on white wheat bread with mushroom soup). The Melt prioritized all-natural ingredients, offered Boylan soda in place of Pepsi or Coke, and featured tug-at-your-heartstrings sides — think Cracker Jacks and warm chocolate cookies.

    Though the all-American fare was selected to evoke nostalgia, the chain’s kitchens boasted cutting-edge equipment meant to eliminate uncertainty from preparing a sandwich and churn large numbers of customers through its doors. “It’s been created in a way that a $10-per-hour worker can make a high-end restaurant-quality sandwich,” Kaplan boasted at All Things Digital. Forget griddles and guessing: The Melt’s Electrolux presses — which staffers nicknamed “WALL-Es,” after the robot in Pixar’s film — relied on what Kaplan called “proprietary” software and hardware, which decreased the strength of the microwave to “increase the quality of the grilled cheese that’s being made.” The Melt also developed equipment for cooking burgers that controlled the heat, time, and pressure through software—“much closer to the way a fryer, microwave, or convection oven that had a timer on it would work” than how an experienced chef would deduce cooking times, Kaplan explained.

    Other tech wizardry was more visible to The Melt’s guests. An online ordering system let customers skip lines by buying their meals in advance. Scanners allowed diners to swipe a QR code and activate those orders without ever speaking to a human. With input from NASA consultants, the company engineered (and patented) a “Smart Box” that could keep French fries crispy and melted cheese gooey, even an hour after being made. Thanks to software and hardware that regulated humidity, heat, and air circulation in these mobile units, The Melt could cater offsite without sacrificing food quality. The Melt continually added tech-enabled perks: ordering kiosks, app-based geo-fencing that kickstarted food prep as customers approached the restaurant, in-store soundtracks that changed songs according to guests’ musical preferences, and an app-based loyalty program.

    In short, like many entrepreneurs, Kaplan harnessed software and hardware to tackle the critical problem of his own satisfaction. “Today when I want my burrito to be fresh and hot, I have to go in and wait on a long line to get my hot fresh burrito,” he said. In Kaplan’s mind, that merited action: “We needed to reinvent the whole fast casual restaurant business.”

    Startups pride themselves on naïve ignorance that allows them to rethink traditional industries — think Airbnb going up against hotels, or Uber taking on taxi unions. It’s easier to “move fast and break things” when you haven’t yet been indoctrinated into the thing you’re trying to break. Kaplan himself noted that his lack of experience with video recorders had proved an asset when creating the Flip cam; logically, the sentiment applied to his new business. “I didn’t know anything about the video camera ten years ago,” he boasted in a 2011 interview with Forbes, “and I don’t know that much about grilled cheese sandwiches or soup.”

    But it didn’t take a grilled cheese savant to taste that in practice, The Melt’s “breakthrough” cooking technology fell short of its promise. Staffers found the Electrolux machines temperamental, and the kitchen’s focus on efficiency and speed came at the expense of quality and flavor. The 45-second sandwich lacked a soul. “It’s a grilled cheese, all right, but a sterile one,” wrote SF Weekly restaurant critic Jonathan Kauffman. “[T]here are no compressed spatula marks in the bread, no globs of cheese that have escaped the bread to crisp on the griddle.”

    San Francisco Yelpers raved about getting the “best grilled cheese ever” and grumbled about greasy bread, inattentive staff, overpriced sandwiches, and minuscule portions. “Silicon Valley money, high profile concept, lots of hype, plans to expand everywhere. But they forgot to make the food taste good,” wrote an online poster.

    And from the beginning, Kaplan clashed with his staff about The Melt’s pared-down offerings. “I’ve already had many, many fights with my team about adding all kinds of things to the menu,” Kaplan said in an interview shortly after The Melt opened its doors. “What’s amazing to me is it reminded me of the early days of the Flip. Everyone wanted to add another feature to the Flip — let’s make it a phone, let’s make it do this, let’s make it do that.”

    Yet The Melt’s purist menu had a flaw: Diners don’t usually eat sandwiches for dinner. Many Melt outlets were chaos at lunchtime and crickets at night. At first, The Melt tried to boost business by launching a breakfast menu in 2012. Two years later, breakfast was out, and The Melt tried again with another menu, featuring burgers and other hearty fare. It stuck. From the original selection of five soup-and-sandwich combos, The Melt now offers six different grilled cheeses, four cheeseburgers, three chicken sandwiches, three French fries, two salads, two mac n’ cheese dishes, four desserts, and only a single soup.

    The new dishes attracted customers; adding meat entrées yielded an impressive “25 or 30 percent increase in sales overnight,” according to Bower. But the menu overhauls proved challenging. Introducing — then dismantling — a breakfast menu is more complex than, say, a software update. Ditto for burgers: As several ex-Melt employees told me, the push into meat required remodeling stores to accommodate new equipment, re-training crews to handle new processes, sourcing different ingredients, and, in some cases, closing and moving existing stores. (Some food courts have rules in place to ensure tenants do not compete in the range of foods they sell.)

    Amid these changes, former employees describe growing pains. It was tough to retain talented workers, train them, and simultaneously keep customer service and satisfaction high. “We’re not making major changes like that anymore. But we had to redesign the kitchens, we had to get different equipment, so, yeah, lots of operating issues,” said Melt investor Michael Marks, a founding partner at the private equity firm Riverwood Capital. (“We haven’t had operational issues,” Kaplan told me, when asked about this period.)

    To Kaplan, the public’s anemic appetite for grilled cheese at dinnertime was not a foreseeable issue. “No one had ever tried to open up grilled cheese restaurants across the country,” he told me. But ex-Melt staffers, as well as an investor, raised concerns that the company might have benefited from a leadership team with greater experience in fast-casual restaurants — which might have allowed them to predict this challenge.

    “They were all good people, and they all wanted good things. They just didn’t know anything about running restaurants, and they ran it like a tech company instead of like a restaurant,” said one employee who worked in a Melt restaurant. Kaplan counters that The Melt was “always a combination of restaurant people and technology folks.” Yet at the top level, The Melt’s board was — and remains — dominated by tech industry veterans: Kaplan, the former Apple executive Ron Johnson, and the venture capitalists Mike Moritz, Michael Marks, and Bruce Dunlevie. Chef Michael Mina, whose restaurant empire has traditionally hewed closer to tablecloth-and-valet-type destinations than Chop’t, is a notable exception.

    The Melt has lagged behind its peers in the fast-casual business, especially as competing outlets have wised up to the benefits of integrating technology. Chains like Starbucks copied many features that originally distinguished The Melt, such as its digital loyalty program. Though The Melt may not have behaved enough like a restaurant, other restaurants have begun behaving like tech companies, eroding the startup edge that had initially given The Melt an advantage.

    Though the restaurant has fallen far short of its goal of opening hundreds of locations, Kaplan places the blame squarely on external forces: “Real estate, real estate, real estate,” he told me. “Our ability to get as much real estate as we wanted, as quickly as we wanted, was limited.” And while yes, the headaches of real estate in the Bay Area—where The Melt first set roots—are well established, fingering real estate is a bit like pointing out that it gets cold every winter.

    “[T]echnology was the promise, and it also may have been the Achilles’ heel,” said a former Melt employee, who declined to be named. “That’s where the arrogance was: We’ve got all this money, we’ve had success in our individual careers in the past, so we can’t get it wrong.”

    The ex-staffer told me the experience had been humbling. Given the chance to do it over, “we should have been spending a lot more time on the food, the customer experience, the management, and the operations.”

    Recently, The Melt has undergone yet another makeover: “Our number one focus is going to be on the food,” The Melt’s new CEO, Ralph Bower, told me. (Which raises the question: What was the number one focus before?) The restaurants have received a handful of tech upgrades: second-generation sandwich presses—this time from Nemco, not Electrolux—screens in kitchens to streamline orders, additional order kiosks, and more TVs in the dining areas that display information on the integrity of The Melt’s ingredients.

    But the more dramatic changes have centered on the old-fashioned business of making good food and courting diners. The Melt’s décor has gotten a facelift: its bright-white subway tiles and metal stools have been traded for minimalist furniture and bleached wood, which lend a warmer feel. Bower is introducing a rotating menu of seasonal specials, in an effort to “romance the food.”

    And internally, The Melt’s mission statement has changed. “The Melt was founded on the idea of ‘better food for our kids, and jobs creation,’” a publicist for The Melt wrote in an email. “While this remains core to what the company does today, the team recently updated and refocused The Melt’s mission statement.” The Melt’s new mission statement? “We consistently provide craveable grilled cheese and cheeseburgers handcrafted by friendly crew members using the best all natural ingredients enabled by helpful technology and served in a warm, welcoming environment.”

    The Melt’s revamped mission is telling. Before, it envisioned itself tackling ambitious and systemic world problems, much as a tech company would. Now, its goals are individualistic and basic: delivering delicious sandwiches to customers. In short, it sounds like a restaurant. And technology has been reduced to a supporting role; The Melt’s tech should be “helpful,” just as its décor should be “welcoming” and its staff “friendly.”

    At a time when tech is rushing into new realms, promising solutions to problems that may or may not exist beyond a pitch deck, this shift is revealing and significant. As The Melt discovered, there are certain human needs that are still best satisfied by the high-touch, not the high-tech. Though processes may be disrupted, changing our desires — especially for something as instinctively pleasurable as grilled cheese — can be far more difficult.

    “I think if you’re looking for the angle of, like, what went wrong, I would say that nothing went wrong,” Kaplan told me when we last spoke. “But what we did learn is that the quality of the food is the most important reason why someone comes to a restaurant.”



              Comment on An AI Ophthalmologist Shows How Machine Learning May Transform Medicine by EMNYENNZE   
    Very promising technology for developing countries.
              The Ultimate Data Infrastructure Architect Bundle for $36   
    From MongoDB to Apache Flume, This Comprehensive Bundle Will Have You Managing Data Like a Pro In No Time
    Expires June 01, 2022 23:59 PST
    Buy now and get 94% off

    Learning ElasticSearch 5.0


    KEY FEATURES

    Learn how to use ElasticSearch in combination with the rest of the Elastic Stack to ship, parse, store, and analyze logs! You'll start by getting an understanding of what ElasticSearch is, what it's used for, and why it's important before being introduced to the new features of ElasticSearch 5.0.

    • Access 35 lectures & 3 hours of content 24/7
    • Go through each of the fundamental concepts of ElasticSearch such as queries, indices, & aggregation
    • Add more power to your searches using filters, ranges, & more
    • See how ElasticSearch can be used w/ other components like LogStash, Kibana, & Beats
    • Build, test, & run your first LogStash pipeline to analyze Apache web logs
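As a rough illustration of the query concepts the course covers, the sketch below builds an ElasticSearch-style "match" query body (the JSON you would send to a search endpoint) and evaluates it against a tiny in-memory document list. The field name, documents, and the simplistic matching logic are all made up for illustration; real ElasticSearch applies analyzers and relevance scoring that this toy evaluator ignores.

```python
# A query body in the shape of the ElasticSearch query DSL.
query = {
    "query": {"match": {"message": "error"}},
    "size": 10,
}

# Hypothetical documents standing in for an index.
docs = [
    {"message": "disk error on node-3", "level": "ERROR"},
    {"message": "user login ok", "level": "INFO"},
    {"message": "timeout error contacting db", "level": "ERROR"},
]

def naive_match(docs, query):
    """Very rough emulation: a doc matches if any query term appears in the field."""
    (field, text), = query["query"]["match"].items()
    terms = text.lower().split()
    hits = [d for d in docs if any(t in d.get(field, "").lower() for t in terms)]
    return hits[: query.get("size", 10)]

hits = naive_match(docs, query)
```

Against a live cluster, the same `query` dict would be the request body for a search call; the point here is only the shape of the DSL.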

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: lifetime
    • Access options: web streaming, mobile streaming
    • Certification of completion not included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: all levels

    Compatibility

    • Internet required

    THE EXPERT

    Ethan Anthony is a San Francisco based Data Scientist who specializes in distributed data centric technologies. He is also the Founder of XResults, where the vision is to harness the power of data to innovate and deliver intuitive customer facing solutions, largely to non-technical professionals. Ethan has over 10 combined years of experience in cloud based technologies such as Amazon Web Services and OpenStack, as well as the data centric technologies of Hadoop, Mahout, Spark and ElasticSearch. He began using ElasticSearch in 2011 and has since delivered solutions based on the Elastic Stack to a broad range of clientele. Ethan has also consulted worldwide, speaks fluent Mandarin Chinese and is insanely curious about human cognition, as related to cognitive dissonance.

    Apache Spark 2 for Beginners


    KEY FEATURES

    Apache Spark is one of the most widely-used large-scale data processing engines and runs at extremely high speeds. It's a framework that has tools that are equally useful for app developers and data scientists. This book starts with the fundamentals of Spark 2 and covers the core data processing framework and API, installation, and application development setup.

    • Access 45 lectures & 5.5 hours of content 24/7
    • Learn the Spark programming model through real-world examples
    • Explore Spark SQL programming w/ DataFrames
    • Cover the charting & plotting features of Python in conjunction w/ Spark data processing
    • Discuss Spark's stream processing, machine learning, & graph processing libraries
    • Develop a real-world Spark application

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: lifetime
    • Access options: web streaming, mobile streaming
    • Certification of completion not included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: all levels

    Compatibility

    • Internet required

    THE EXPERT

    Rajanarayanan Thottuvaikkatumana, Raj, is a seasoned technologist with more than 23 years of software development experience at various multinational companies. He has lived and worked in India, Singapore, and the USA, and is presently based out of the UK. His experience includes architecting, designing, and developing software applications. He has worked on various technologies including major databases, application development platforms, web technologies, and big data technologies. Since 2000, he has been working mainly in Java related technologies, and does heavy-duty server-side programming in Java and Scala. He has worked on very highly concurrent, highly distributed, and high transaction volume systems. Currently he is building a next generation Hadoop YARN-based data processing platform and an application suite built with Spark using Scala.

    Raj holds one master's degree in Mathematics, one master's degree in Computer Information Systems and has many certifications in ITIL and cloud computing to his credit. Raj is the author of Cassandra Design Patterns - Second Edition, published by Packt.

    When not working on the assignments his day job demands, Raj is an avid listener to classical music and watches a lot of tennis.

    Designing AWS Environments


    KEY FEATURES

    Amazon Web Services (AWS) provides trusted, cloud-based solutions to help businesses meet all of their needs. Running solutions in the AWS Cloud can help you (or your company) get applications up and running faster while providing the security needed to meet your compliance requirements. This course leaves no stone unturned in getting you up to speed with administering AWS.

    • Access 19 lectures & 2 hours of content 24/7
    • Familiarize yourself w/ the key capabilities to architect & host apps, websites, & services on AWS
    • Explore the available options for virtual instances & demonstrate launching & connecting to them
    • Design & deploy networking & hosting solutions for large deployments
    • Focus on security & important elements of scalability & high availability

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: lifetime
    • Access options: web streaming, mobile streaming
    • Certification of completion not included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: all levels

    Compatibility

    • Internet required

    THE EXPERT

    Wayde Gilchrist started moving customers of his IT consulting business into the cloud and away from traditional hosting environments in 2010. In addition to consulting, he delivers AWS training for Fortune 500 companies, government agencies, and international consulting firms. When he is not out visiting customers, he is delivering training virtually from his home in Florida.

    Learning MongoDB


    KEY FEATURES

    Businesses today have access to more data than ever before, and a key challenge is ensuring that data can be easily accessed and used efficiently. MongoDB makes it possible to store and process large sets of data in ways that drive up business value. Learning MongoDB will give you the flexibility of unstructured storage, combined with robust querying and post processing functionality, making you an asset to enterprise Big Data needs.

    • Access 64 lectures & 40 hours of content 24/7
    • Master data management, queries, post processing, & essential enterprise redundancy requirements
    • Explore advanced data analysis using both MapReduce & the MongoDB aggregation framework
    • Delve into SSL security & programmatic access using various languages
    • Learn about MongoDB's built-in redundancy & scale features, replica sets, & sharding
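To give a feel for the aggregation framework mentioned above, here is a $match-then-$group pipeline in the same shape you would pass to `collection.aggregate()`, evaluated by a tiny pure-Python interpreter over in-memory documents. The collection, field names, and the interpreter itself are illustrative only; real MongoDB supports many more stages and accumulators than this sketch handles.

```python
# Hypothetical documents standing in for a collection.
orders = [
    {"status": "shipped", "region": "EU", "total": 20},
    {"status": "shipped", "region": "US", "total": 35},
    {"status": "pending", "region": "EU", "total": 15},
    {"status": "shipped", "region": "EU", "total": 10},
]

# The pipeline, in MongoDB's own syntax: filter, then sum per group.
pipeline = [
    {"$match": {"status": "shipped"}},
    {"$group": {"_id": "$region", "revenue": {"$sum": "$total"}}},
]

def run_pipeline(docs, pipeline):
    """Toy evaluator for $match (equality only) and $group (with $sum only)."""
    for stage in pipeline:
        if "$match" in stage:
            crit = stage["$match"]
            docs = [d for d in docs if all(d.get(k) == v for k, v in crit.items())]
        elif "$group" in stage:
            spec = dict(stage["$group"])
            key_field = spec.pop("_id").lstrip("$")
            groups = {}
            for d in docs:
                g = groups.setdefault(d[key_field], {"_id": d[key_field]})
                for out_name, acc in spec.items():
                    g[out_name] = g.get(out_name, 0) + d[acc["$sum"].lstrip("$")]
            docs = list(groups.values())
    return docs

result = run_pipeline(orders, pipeline)
```

The same `pipeline` list, handed to a real driver, would push the filtering and summation to the server instead of doing it client-side.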

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: lifetime
    • Access options: web streaming, mobile streaming
    • Certification of completion not included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: all levels

    Compatibility

    • Internet required

    THE EXPERT

    Daniel Watrous is a 15-year veteran of designing web-enabled software. His focus on data store technologies spans relational databases, caching systems, and contemporary NoSQL stores. For the last six years, he has designed and deployed enterprise-scale MongoDB solutions in semiconductor manufacturing and information technology companies. He holds a degree in electrical engineering from the University of Utah, focusing on semiconductor physics and optoelectronics. He also completed an MBA from the Northwest Nazarene University. In his current position as senior cloud architect with Hewlett Packard, he focuses on highly scalable cloud-native software systems.

    Learning Hadoop 2


    KEY FEATURES

    Hadoop emerged in response to the proliferation of masses and masses of data collected by organizations, offering a strong solution to store, process, and analyze what has commonly become known as Big Data. It comprises a comprehensive stack of components designed to enable these tasks on a distributed scale, across multiple servers and thousands of machines. In this course, you'll learn Hadoop 2, introducing yourself to the powerful system synonymous with Big Data.

    • Access 19 lectures & 1.5 hours of content 24/7
    • Get an overview of the Hadoop component ecosystem, including HDFS, Sqoop, Flume, YARN, MapReduce, Pig, & Hive
    • Install & configure a Hadoop environment
    • Explore Hue, the graphical user interface of Hadoop
    • Discover HDFS to import & export data, both manually & automatically
    • Run computations using MapReduce & get to grips working w/ Hadoop's scripting language, Pig
    • Siphon data from HDFS into Hive & demonstrate how it can be used to structure & query data sets
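The MapReduce model the bullets refer to can be sketched in-process with the canonical word-count example: a map phase emits (word, 1) pairs, a shuffle groups pairs by key, and a reduce phase sums each group. Real Hadoop runs these phases distributed across machines with HDFS underneath; the data flow below is the same idea at toy scale, with made-up input lines.

```python
from collections import defaultdict

def map_phase(lines):
    """Map: emit a (word, 1) pair for every word in every line."""
    for line in lines:
        for word in line.split():
            yield (word.lower(), 1)

def shuffle(pairs):
    """Shuffle: group all emitted values by their key."""
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(grouped):
    """Reduce: sum the grouped counts per word."""
    return {word: sum(counts) for word, counts in grouped.items()}

lines = ["Big data big plans", "big results"]
counts = reduce_phase(shuffle(map_phase(lines)))
```

In Hadoop proper, the mapper and reducer would be separate classes (or scripts, via Streaming) and the shuffle is handled by the framework.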

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: lifetime
    • Access options: web streaming, mobile streaming
    • Certification of completion not included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: all levels

    Compatibility

    • Internet required

    THE EXPERT

    Randal Scott King is the Managing Partner of Brilliant Data, a consulting firm specialized in data analytics. In his 16 years of consulting, Scott has amassed an impressive list of clientele from mid-market leaders to Fortune 500 household names. Scott lives just outside Atlanta, GA, with his children.

    ElasticSearch 5.x Cookbook eBook


    KEY FEATURES

    ElasticSearch is a Lucene-based distributed search server that allows users to index and search unstructured content with petabytes of data. Through this ebook, you'll be guided through comprehensive recipes covering what's new in ElasticSearch 5.x as you create complex queries and analytics. By the end, you'll have an in-depth knowledge of how to implement the ElasticSearch architecture and be able to manage data efficiently and effectively.

    • Access 696 pages of content 24/7
    • Perform index mapping, aggregation, & scripting
    • Explore the modules of Cluster & Node monitoring
    • Understand how to install Kibana to monitor a cluster & extend Kibana for plugins
    • Integrate your Java, Scala, Python, & Big Data apps w/ ElasticSearch

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: lifetime
    • Access options: web streaming, mobile streaming
    • Certification of completion not included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: all levels

    Compatibility

    • Internet required

    THE EXPERT

    Alberto Paro is an engineer, project manager, and software developer. He currently works as freelance trainer/consultant on big data technologies and NoSQL solutions. He loves to study emerging solutions and applications mainly related to big data processing, NoSQL, natural language processing, and neural networks. He began programming in BASIC on a Sinclair Spectrum when he was eight years old, and to date, has collected a lot of experience using different operating systems, applications, and programming languages.

    In 2000, he graduated in computer science engineering from Politecnico di Milano with a thesis on designing multiuser and multidevice web applications. He assisted professors at the university for about a year. He then came in contact with The Net Planet Company and loved their innovative ideas; he started working on knowledge management solutions and advanced data mining products. In summer 2014, his company was acquired by a big data technologies company, where he worked until the end of 2015 mainly using Scala and Python on state-of-the-art big data software (Spark, Akka, Cassandra, and YARN). In 2013, he started freelancing as a consultant for big data, machine learning, Elasticsearch and other NoSQL products. He has created or helped to develop big data solutions for business intelligence, financial, and banking companies all over the world. A lot of his time is spent teaching how to efficiently use big data solutions (mainly Apache Spark), NoSQL datastores (Elasticsearch, HBase, and Accumulo) and related technologies (Scala, Akka, and Playframework). He is often called to present at big data or Scala events. He is an evangelist on Scala and Scala.js (the transcompiler from Scala to JavaScript).

    In his spare time, when he is not playing with his children, he likes to work on open source projects. When he was in high school, he started contributing to projects related to the GNOME environment (gtkmm). One of his preferred programming languages is Python, and he wrote one of the first NoSQL backends on Django for MongoDB (Django-MongoDBengine). In 2010, he began using Elasticsearch to provide search capabilities to some Django e-commerce sites and developed PyES (a Pythonic client for Elasticsearch), as well as the initial part of the Elasticsearch MongoDB river. He is the author of Elasticsearch Cookbook as well as a technical reviewer of Elasticsearch Server-Second Edition, Learning Scala Web Development, and the video course, Building a Search Server with Elasticsearch, all of which are published by Packt Publishing.

    Fast Data Processing with Spark 2 eBook


    KEY FEATURES

    Compared to Hadoop, Spark is a significantly simpler way to process Big Data at speed. It is increasing in popularity with data analysts and engineers everywhere, and in this course you'll learn how to use Spark with minimum fuss. Starting with the fundamentals, this ebook will help you take your Big Data analytical skills to the next level.

    • Access 274 pages of content 24/7
    • Get to grips w/ some simple APIs before investigating machine learning & graph processing
    • Learn how to use the Spark shell
    • Load data & build & run your own Spark applications
    • Discover how to manipulate RDD
    • Understand useful machine learning algorithms w/ the help of Spark MLlib & R
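The RDD manipulation mentioned above boils down to chaining transformations like map and filter and then collapsing with an action like reduce. In PySpark this might read `sc.parallelize(data).map(...).filter(...).reduce(...)`; the sketch below emulates that chain eagerly with Python builtins so the shape of the job is visible without a cluster (the data and operations are invented for illustration).

```python
from functools import reduce

data = [1, 2, 3, 4, 5, 6]

# map: square each element (a transformation in Spark terms)
squared = list(map(lambda x: x * x, data))

# filter: keep only the squares greater than 10 (another transformation)
big = list(filter(lambda x: x > 10, squared))

# reduce: sum the survivors (an action, which would trigger execution in Spark)
total = reduce(lambda a, b: a + b, big)
```

A key difference worth noting: Spark evaluates transformations lazily and only computes when an action runs, whereas this sketch executes each step immediately.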

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: lifetime
    • Access options: web streaming, mobile streaming
    • Certification of completion not included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: all levels

    Compatibility

    • Internet required

    THE EXPERT

    Krishna Sankar is a Senior Specialist—AI Data Scientist with Volvo Cars focusing on Autonomous Vehicles. His earlier stints include Chief Data Scientist at http://cadenttech.tv/, Principal Architect/Data Scientist at Tata America Intl. Corp., Director of Data Science at a bioinformatics startup, and Distinguished Engineer at Cisco. He has spoken at various conferences, including ML tutorials at Strata SJC and London 2016, Spark Summit, Strata-Spark Camp, OSCON, PyCon, and PyData, and writes about Robots Rules of Order, Big Data Analytics—Best of the Worst, predicting the NFL, Spark, Data Science, Machine Learning, and Social Media Analysis. He has also been a guest lecturer at the Naval Postgraduate School. His occasional blogs can be found at https://doubleclix.wordpress.com/. His other passions are flying drones (he is working towards an FAA UAS Drone Pilot License) and Lego Robotics—you will find him at the St. Louis FLL World Competition as a Robots Design Judge.

    MongoDB Cookbook: Second Edition eBook


    KEY FEATURES

    MongoDB is a high-performance, feature-rich, NoSQL database that forms the backbone of the systems that power many organizations. Packed with easy-to-use features that have become essential for a variety of software professionals, MongoDB is a vital technology to learn for any aspiring data scientist or systems engineer. This cookbook contains many solutions to the everyday challenges of MongoDB, as well as guidance on effective techniques to extend your skills and capabilities.

    • Access 274 pages of content 24/7
    • Initialize the server in three different modes w/ various configurations
    • Get introduced to programming language drivers in Java & Python
    • Learn advanced query operations, monitoring, & backup using MMS
    • Find recipes on cloud deployment, including how to work w/ Docker containers along MongoDB

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: lifetime
    • Access options: web streaming, mobile streaming
    • Certification of completion not included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: all levels

    Compatibility

    • Internet required

    THE EXPERT

    Amol Nayak is a MongoDB certified developer and has been working as a developer for over 8 years. He is currently employed with a leading financial data provider, working on cutting-edge technologies. He has used MongoDB as a database for various systems at his current and previous workplaces to support enormous data volumes. He is an open source enthusiast and supports it by contributing to open source frameworks and promoting them. He has made contributions to the Spring Integration project, and his contributions are the adapters for JPA, XQuery, MongoDB, Push notifications to mobile devices, and Amazon Web Services (AWS). He has also made some contributions to the Spring Data MongoDB project. Apart from technology, he is passionate about motor sports and is a race official at Buddh International Circuit, India, for various motor sports events. Earlier, he was the author of Instant MongoDB, Packt Publishing.

    Cyrus Dasadia always liked tinkering with open source projects since 1996. He has been working as a Linux system administrator and part-time programmer for over a decade. He works at InMobi, where he loves designing tools and platforms. His love for MongoDB started in 2013, when he was amazed by its ease of use and stability. Since then, almost all of his projects are written with MongoDB as the primary backend. Cyrus is also the creator of an open source alert management system called CitoEngine. He likes spending his spare time trying to reverse engineer software, playing computer games, or increasing his silliness quotient by watching reruns of Monty Python.

    Learning Apache Kafka: Second Edition eBook


    KEY FEATURES

    Apache Kafka is simple to describe at a high level but has an immense amount of technical detail when you dig deeper. This step-by-step, practical guide will help you take advantage of the power of Kafka to handle hundreds of megabytes of messages per second from multiple clients.

    • Access 120 pages of content 24/7
    • Set up Kafka clusters
    • Understand basic building blocks like producers, brokers, & consumers
    • Explore additional settings & configuration changes to achieve more complex goals
    • Learn how Kafka is designed internally & what configurations make it most effective
    • Discover how Kafka works w/ other tools like Hadoop, Storm, & more

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: lifetime
    • Access options: web streaming, mobile streaming
    • Certification of completion not included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: all levels

    Compatibility

    • Internet required

    THE EXPERT

    Nishant Garg has over 14 years of software architecture and development experience in various technologies, such as Java Enterprise Edition, SOA, Spring, Hadoop, Hive, Flume, Sqoop, Oozie, Spark, Shark, YARN, Impala, Kafka, Storm, Solr/Lucene, NoSQL databases (such as HBase, Cassandra, and MongoDB), and MPP databases (such as GreenPlum).

    He received his MS in software systems from the Birla Institute of Technology and Science, Pilani, India, and is currently working as a technical architect for the Big Data R&D Group with Impetus Infotech Pvt. Ltd. Previously, Nishant has enjoyed working with some of the most recognizable names in IT services and financial industries, employing full software life cycle methodologies such as Agile and SCRUM.

    Nishant has undertaken many speaking engagements on big data technologies and is also the author of HBase Essentials, Packt Publishing.

    Apache Flume: Distributed Log Collection for Hadoop: Second Edition eBook


    KEY FEATURES

    Apache Flume is a distributed, reliable, and available service used to efficiently collect, aggregate, and move large amounts of log data. It's used to stream logs from application servers to HDFS for ad hoc analysis. This ebook starts with an architectural overview of Flume and its logical components, and pulls everything together into a real-world, end-to-end use case encompassing simple and advanced features.

    • Access 178 pages of content 24/7
    • Explore channels, sinks, & sink processors
    • Learn about sources & channels
    • Construct a series of Flume agents to dynamically transport your stream data & logs from your systems into Hadoop

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: lifetime
    • Access options: web streaming, mobile streaming
    • Certification of completion not included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: all levels

    Compatibility

    • Internet required

    THE EXPERT

    Steve Hoffman has 32 years of experience in software development, ranging from embedded software development to the design and implementation of large-scale, service-oriented, object-oriented systems. For the last 5 years, he has focused on infrastructure as code, including automated Hadoop and HBase implementations and data ingestion using Apache Flume. Steve holds a BS in computer engineering from the University of Illinois at Urbana-Champaign and an MS in computer science from DePaul University. He is currently a senior principal engineer at Orbitz Worldwide (http://orbitz.com/).

              The Coding Powerhouse eBook Bundle for $29   
    Here's a 9-Book Digital Library to Be Your Reference For Everything From Web Development to Software Engineering
    Expires July 13, 2018 23:59 PST
    Buy now and get 91% off

    Learning Angular 2


    KEY FEATURES

    Angular 2 was conceived as a complete rewrite in order to fulfill the expectations of modern developers who demand blazing fast performance and responsiveness from their web applications. This book will help you learn the basics of how to design and build Angular 2 components, providing full coverage of the TypeScript syntax required to follow the examples included.

    • Access 352 pages of content 24/7
    • Set up your working environment to have all the tools you need to start building Angular 2 components w/ minimum effort
    • Get up to speed w/ TypeScript, a powerful typed superset of JavaScript that compiles to plain JavaScript
    • Take full control of how your data is rendered & updated upon data changes
    • Build powerful web applications based on structured component hierarchies that emit & listen to events & data changes throughout the elements tree
    • Explore how to consume external APIs & data services & allow data editing by harnessing the power of web forms made with Angular 2
    • Deliver seamless web navigation experiences w/ application routing & state handling common features w/ ease
    • Discover how to bulletproof your applications by introducing smart unit testing techniques & debugging tools

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: lifetime
    • Access options: web streaming, mobile streaming
    • Certification of completion not included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: all levels

    Compatibility

    • Internet required

    THE EXPERT

    Packt’s mission is to help the world put software to work in new ways, through the delivery of effective learning and information services to IT professionals. Working towards that vision, it has published over 3,000 books and videos so far, providing IT professionals with the actionable knowledge they need to get the job done–whether that’s specific learning on an emerging technology or optimizing key skills in more established tools.

    Java Deep Learning Essentials


    KEY FEATURES

    AI and Deep Learning are transforming the way we understand software, making computers more intelligent than we could even imagine just a decade ago. Starting with an introduction to basic machine learning algorithms, this course takes you further into this vital world of stunning predictive insights and remarkable machine intelligence.

    • Access 254 pages of content 24/7
    • Get a practical deep dive into machine learning & deep learning algorithms
    • Implement machine learning algorithms related to deep learning
    • Explore neural networks using some of the most popular Deep Learning frameworks
    • Dive into Deep Belief Nets & Stacked Denoising Autoencoders algorithms
    • Discover more deep learning algorithms w/ Dropout & Convolutional Neural Networks
    • Gain an insight into the deep learning library DL4J & its practical uses
    • Get to know device strategies to use deep learning algorithms & libraries in the real world
    • Explore deep learning further w/ Theano & Caffe

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: lifetime
    • Access options: web streaming, mobile streaming
    • Certification of completion not included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: all levels

    Compatibility

    • Internet required

    THE EXPERT

    Packt’s mission is to help the world put software to work in new ways, through the delivery of effective learning and information services to IT professionals. Working towards that vision, it has published over 3,000 books and videos so far, providing IT professionals with the actionable knowledge they need to get the job done–whether that’s specific learning on an emerging technology or optimizing key skills in more established tools.

    Mastering Python


    KEY FEATURES

    Python is a dynamic programming language known for its high readability, which is why it is often the first language new programmers learn. Being multi-paradigm, it can be used to achieve the same thing in different ways, and it is compatible across different platforms. This book is an authoritative guide that will help you learn new advanced methods in a clear and contextualized way.

    • Access 486 pages of content 24/7
    • Create a virtualenv & start a new project
    • Understand how & when to use the functional programming paradigm
    • Get familiar w/ the different ways the decorators can be written in
    • Understand the power of generators & coroutines without digressing into lambda calculus
    • Generate HTML documentation out of documents & code using Sphinx
    • Learn how to track & optimize application performance, both memory & cpu
    • Use the multiprocessing library, not just locally but also across multiple machines
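    The decorator and generator topics above lend themselves to a quick illustration. The snippet below is a minimal sketch, not taken from the book itself: a hypothetical `logged` decorator that records each call, and a `squares` generator that produces values lazily instead of building a list up front.

    ```python
    import functools

    def logged(func):
        """A minimal decorator: wraps a function and records each call's arguments."""
        calls = []
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            calls.append(args)
            return func(*args, **kwargs)
        wrapper.calls = calls  # expose the call log for inspection
        return wrapper

    @logged
    def square(x):
        return x * x

    def squares(n):
        """A generator: yields each value on demand rather than returning a list."""
        for i in range(n):
            yield i * i

    result = square(4)       # 16, and the call (4,) is recorded
    lazy = list(squares(4))  # [0, 1, 4, 9]
    ```

    The same two constructs scale up to the course's later material: decorators underpin caching and instrumentation, while generators are the basis of coroutines.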

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: lifetime
    • Access options: web streaming, mobile streaming
    • Certification of completion not included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: all levels

    Compatibility

    • Internet required

    THE EXPERT

    Packt’s mission is to help the world put software to work in new ways, through the delivery of effective learning and information services to IT professionals. Working towards that vision, it has published over 3,000 books and videos so far, providing IT professionals with the actionable knowledge they need to get the job done–whether that’s specific learning on an emerging technology or optimizing key skills in more established tools.

    Mastering React


    KEY FEATURES

    React stands out in the web framework crowd through its approach to composition which yields blazingly fast rendering capabilities. This book will help you understand what makes React special. It starts with the fundamentals and uses a pragmatic approach, focusing on clear development goals. You'll learn how to combine many web technologies surrounding React into a complete set for constructing a modern web application.

    • Access 254 pages of content 24/7
    • Understand the React component lifecycle & core concepts such as props & states
    • Craft forms & implement form validation patterns using React
    • Explore the anatomy of a modern single-page web application
    • Develop an approach for choosing & combining web technologies without being paralyzed by the options available
    • Create a complete single-page application
    • Start coding w/ a plan using an application design process
    • Add to your arsenal of prototyping techniques & tools
    • Make your React application feel great using animations

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: lifetime
    • Access options: web streaming, mobile streaming
    • Certification of completion not included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: all levels

    Compatibility

    • Internet required

    THE EXPERT

    Packt’s mission is to help the world put software to work in new ways, through the delivery of effective learning and information services to IT professionals. Working towards that vision, it has published over 3,000 books and videos so far, providing IT professionals with the actionable knowledge they need to get the job done–whether that’s specific learning on an emerging technology or optimizing key skills in more established tools.

    Mastering JavaScript


    KEY FEATURES

    JavaScript is the browser language that supports object-oriented, imperative, and functional programming styles, focusing on website behavior. JavaScript provides web developers with the knowledge to program more intelligently and idiomatically—and this course will help you explore the best practices for building an original, functional, and useful cross-platform library. At course's end, you'll be equipped with all the knowledge, tips, and hacks you need to stand out in the advanced world of web development.

    • Access 250 pages of content 24/7
    • Get a run through of the basic JavaScript language constructs
    • Familiarize yourself w/ the Functions & Closures of JavaScript
    • Explore Regular Expressions in JavaScript
    • Code using the powerful object-oriented feature in JavaScript
    • Test & debug your code using JavaScript strategies
    • Master DOM manipulation, cross-browser strategies, & ES6
    • Understand the basic concurrency constructs in JavaScript & best performance strategies
    • Learn to build scalable server applications in JavaScript using Node.js

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: lifetime
    • Access options: web streaming
    • Certification of completion not included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: all levels

    Compatibility

    • Internet required

    THE EXPERT

    Packt’s mission is to help the world put software to work in new ways, through the delivery of effective learning and information services to IT professionals. Working towards that vision, it has published over 3,000 books and videos so far, providing IT professionals with the actionable knowledge they need to get the job done–whether that’s specific learning on an emerging technology or optimizing key skills in more established tools.

    Mastering Git


    KEY FEATURES

    A powerful source control management program, Git will allow you to track changes and revert to any previous versions of your code, helping you implement an efficient, effective workflow. With this course, you'll master everything from setting up your Git environment, to writing clean code using the Reset and Revert features, to ultimately understanding the entire Git workflow from start to finish.

    • Access 418 pages of content 24/7
    • Explore project history, find revisions using different criteria, & filter & format how history looks
    • Manage your working directory & staging area for commits & interactively create new revisions & amend them
    • Set up repositories & branches for collaboration
    • Submit your own contributions & integrate contributions from other developers via merging or rebasing
    • Customize Git behavior system-wide, on a per-user, per-repository, & per-file basis
    • Take up the administration & set up of Git repositories, configure access, find & recover from repository errors, & perform repository maintenance
    • Choose a workflow & configure & set up support for the chosen workflow

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: lifetime
    • Access options: web streaming, mobile streaming
    • Certification of completion not included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: all levels

    Compatibility

    • Internet required

    THE EXPERT

    Packt Publishing’s mission is to help the world put software to work in new ways, through the delivery of effective learning and information services to IT professionals. Working towards that vision, it has published over 3,000 books and videos so far, providing IT professionals with the actionable knowledge they need to get the job done–whether that’s specific learning on an emerging technology or optimizing key skills in more established tools.

    Xamarin Cross-Platform Development Cookbook


    KEY FEATURES

    The Xamarin Forms platform lets you create native mobile applications for iOS, Android, and Windows Phone all at the same time. With Xamarin you can share large amounts of code, such as the UI, business logic, data models, SQLite data access, HTTP data access, and file storage across all three platforms. That's a huge time saver. This book provides recipes for creating an architecture that will be maintainable and extendable.

    • Access 416 pages of content 24/7
    • Create & customize your cross-platform UI
    • Understand & explore cross-platform patterns & practices
    • Use the out-of-the-box services to support third-party libraries
    • Find out how to get feedback while your application is used by your users
    • Bind collections to ListView & customize its appearance w/ custom cells
    • Create shared data access using a local SQLite database & a REST service
    • Test & monitor your applications

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: lifetime
    • Access options: web streaming, mobile streaming
    • Certification of completion not included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: all levels

    Compatibility

    • Internet required

    THE EXPERT

    Packt Publishing’s mission is to help the world put software to work in new ways, through the delivery of effective learning and information services to IT professionals. Working towards that vision, it has published over 3,000 books and videos so far, providing IT professionals with the actionable knowledge they need to get the job done–whether that’s specific learning on an emerging technology or optimizing key skills in more established tools.

    Swift 3 Functional Programming


    KEY FEATURES

    Whether you're new to functional programming and Swift or experienced, this book will strengthen the skills you need to design and develop high-quality, scalable, and efficient applications. Based on the Swift 3 Developer preview version, it focuses on simplifying functional programming (FP) paradigms to solve many day-to-day development problems.

    • Access 296 pages of content 24/7
    • Learn first-class, higher-order, & pure functions
    • Explore closures & capturing values
    • Understand value & reference types
    • Discuss enumerations, algebraic data types, patterns, & pattern matching
    • Combine FP paradigms w/ OOP, FRP, & POP in your day-to-day development activities
    • Develop a back end application w/ Swift

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: lifetime
    • Access options: web streaming, mobile streaming
    • Certification of completion not included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: all levels

    Compatibility

    • Internet required

    THE EXPERT

    Packt Publishing’s mission is to help the world put software to work in new ways, through the delivery of effective learning and information services to IT professionals. Working towards that vision, it has published over 3,000 books and videos so far, providing IT professionals with the actionable knowledge they need to get the job done–whether that’s specific learning on an emerging technology or optimizing key skills in more established tools.

    Scala High Performance Programming


    KEY FEATURES

    Scala is a statically and strongly typed language that blends functional and object-oriented paradigms. It has grown in popularity as an appealing and pragmatic choice to write production-ready software in the functional paradigm, enabling you to solve problems with less code and lower maintenance costs than alternatives. This book arms you with the knowledge you need to create performant Scala applications, starting with the basics.

    • Access 274 pages of content 24/7
    • Analyze the performance of JVM applications by developing JMH benchmarks & profiling with Flight Recorder
    • Discover use cases & performance tradeoffs of Scala language features, & eager & lazy collections
    • Explore event sourcing to improve performance while working w/ stream processing pipelines
    • Dive into asynchronous programming to extract performance on multicore systems using Scala Future & Scalaz Task
    • Design distributed systems w/ conflict-free replicated data types (CRDTs) to take advantage of eventual consistency without synchronization
    • Understand the impact of queues on system performance & apply the free monad to build systems robust to high levels of throughput

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: lifetime
    • Access options: web streaming, mobile streaming
    • Certification of completion not included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: all levels

    Compatibility

    • Internet required

    THE EXPERT

    Packt Publishing’s mission is to help the world put software to work in new ways, through the delivery of effective learning and information services to IT professionals. Working towards that vision, it has published over 3,000 books and videos so far, providing IT professionals with the actionable knowledge they need to get the job done–whether that’s specific learning on an emerging technology or optimizing key skills in more established tools.

              The Perfect Python Programming Bundle: Lifetime Access for $24   
    Learn to Code in This Valuable Language Today & Watch the Career Doors Open Tomorrow
    Expires September 10, 2017 23:59 PST
    Buy now and get 97% off

    Introduction to Programming & Coding for Everyone with JavaScript


    KEY FEATURES

    Get your coding odyssey started with this beginner's course in the world's most popular programming language, JavaScript. Over this course you'll gain solid general programming foundations, while exploring JavaScript without having to download any special software. By course's end, you'll be on great footing to advance to more complicated subject matter.

    • Access 10 lectures & 3 hours of content 24/7
    • Write programs in JavaScript to display output messages
    • Prompt for input using JavaScript
    • Use variables to store information
    • Build programs to make decisions & repeat a sequence of operations

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: Lifetime
    • Access options: web streaming, mobile streaming
    • Certification of completion included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: all levels

    Compatibility

    • Internet required

    THE EXPERT

    At GogoTraining, their mission is to help you master the world by unlocking your full potential. They do that by working with technology masters to create curriculum paths in key technologies. These paths allow you to jump in at the level that is right for you so you can master the technology. For more details on this course and instructor, click here.

    Python 3 Programming Essentials


    KEY FEATURES

    Python is a general purpose programming language that was explicitly designed to be highly readable, making it one of the ideal languages for beginner programmers. Despite being relatively easy to learn, Python is an extremely powerful language, used in the creation of YouTube, Instagram, and Reddit, and commonly seen in machine learning as well. Because Python has so many different uses, there are always many Python programming jobs available that pay very well. After this course, you'll be well on your way to breaking into one of those jobs.

    • Learn the basic structure of Python including Python's data types & control flow constructs
    • Write, debug & execute Python programs
    • Describe & work w/ nested data types & exception handling
    • Understand the basics of modules & object oriented programming in Python
    • Use Python to work w/ files & the operating system
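    As a quick illustration of the nested data types and exception handling covered above, here is a minimal sketch; the `grades` data and the `average` helper are invented for the example rather than drawn from the course.

    ```python
    # A nested data type: a dict mapping names to lists of scores.
    grades = {"ada": [92, 88], "alan": [75]}

    def average(scores):
        """Average a list of scores, handling the empty-list case via an exception."""
        try:
            return sum(scores) / len(scores)
        except ZeroDivisionError:
            return 0.0

    # A dict comprehension walks the nested structure to build a report.
    report = {name: average(s) for name, s in grades.items()}
    # report["ada"] == 90.0, and an empty score list averages to 0.0
    ```

    Catching `ZeroDivisionError` at the point it can occur, instead of pre-checking the list, is the idiomatic "easier to ask forgiveness than permission" style the course's exception-handling material builds on.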

    PRODUCT SPECS

    • Length of time users can access this course: Lifetime
    • Access options: web streaming, mobile streaming
    • Certification of completion included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: all levels

    Compatibility

    • Internet required

    THE EXPERT

    At GogoTraining, their mission is to help you master the world by unlocking your full potential. They do that by working with technology masters to create curriculum paths in key technologies. These paths allow you to jump in at the level that is right for you so you can master the technology. For more details on this course and instructor, click here.

    Advanced Python 3 Programming


    KEY FEATURES

    While it is one of the easier programming languages to learn, Python has such broad functionality that programs can rapidly escalate from beginner to advanced. This course covers many of the advanced features of Python so you can enhance your Python knowledge and be eligible for more complicated, better-paying programming jobs. If you want to work in any programming field, you're going to want to take this course.

    • Write Python programs using complex data types
    • Create object oriented programs in Python
    • Use Python to create GUI programs
    • Master regular expressions & threads in Python programs
    • Build network programs in Python
    • Work w/ SQL databases
    • Extend Python programs w/ C code
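    The regular-expression and threading topics above can be sketched together in a few lines. This hypothetical example, not taken from the course, splits a list of log lines across two threads and uses a lock to guard a shared counter while a compiled pattern classifies each line.

    ```python
    import re
    import threading

    # A compiled pattern with named groups for the log level and message.
    LOG_LINE = re.compile(r"(?P<level>ERROR|WARN)\s+(?P<msg>.+)")

    def count_errors(lines, out, lock):
        """Count ERROR lines; the lock guards the shared counter in out[0]."""
        for line in lines:
            m = LOG_LINE.match(line)
            if m and m.group("level") == "ERROR":
                with lock:
                    out[0] += 1

    lines = ["ERROR disk full", "WARN low memory", "ERROR timeout"]
    out, lock = [0], threading.Lock()
    # Interleave the lines across two worker threads.
    threads = [threading.Thread(target=count_errors, args=(lines[i::2], out, lock))
               for i in range(2)]
    for t in threads: t.start()
    for t in threads: t.join()
    # out[0] == 2
    ```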

    PRODUCT SPECS

    • Length of time users can access this course: Lifetime
    • Access options: web streaming, mobile streaming
    • Certification of completion included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: all levels

    Compatibility

    • Internet required

    THE EXPERT

    At GogoTraining, their mission is to help you master the world by unlocking your full potential. They do that by working with technology masters to create curriculum paths in key technologies. These paths allow you to jump in at the level that is right for you so you can master the technology. For more details on this course and instructor, click here.

    Fundamentals of Operating Systems


    KEY FEATURES

    Operating systems are the backbone of any computer device, from laptops to smartphones. As such, it is essential for any aspiring programmer to understand how operating systems work, which is exactly what this course aims to do. Over these six hours, you'll discuss the various functions of operating systems and the interrelationships of those functions. Plus, you'll receive a companion textbook to get more detailed info whenever you need it.

    • Access 17 lectures & 6 hours of content 24/7
    • Explain the overall objectives & structure of any modern operating system
    • Identify differences & similarities between operating systems
    • Describe how the functions within an operating system work together
    • Understand the causes of many operating system crashes & errors
    • Choose which operating system best suits individual situations

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: Lifetime
    • Access options: web streaming, mobile streaming
    • Certification of completion included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: all levels

    Compatibility

    • Internet required

    THE EXPERT

    At GogoTraining, their mission is to help you master the world by unlocking your full potential. They do that by working with technology masters to create curriculum paths in key technologies. These paths allow you to jump in at the level that is right for you so you can master the technology. For more details on this course and instructor, click here.

              The Complete Python Programming Bundle for $79   
    Take a Deep Dive Into a Wide Range of Python's Many Capabilities
    Expires May 05, 2025 23:59 PST
    Buy now and get 93% off

    Python Programming for Beginners


    KEY FEATURES

    Designed for beginners, this comprehensive Python course will introduce you to this general-purpose programming language that many professionals consider one of the best first languages to learn. Even if you've never written a line of code in your life, you'll learn how to build a complete program from scratch in this language used by tech giants like Google, Pinterest, and Instagram.

    • Access 4 hours of course content 24/7
    • Understand the Python installation process & learn about variables, loops, & statements
    • Master function parameters, variables & common errors
    • Verify your knowledge w/ practical exercises
    • Build your own Python program from scratch
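    The variables, loops, and function-parameter topics above boil down to patterns like the following minimal sketch; the `greet` function and the names are invented for illustration.

    ```python
    def greet(name, punctuation="!"):
        """A function with one required parameter and one default parameter."""
        return "Hello, " + name + punctuation

    # A loop with a decision (if/else) inside it, storing results in a variable.
    messages = []
    for name in ["Ada", "Alan", "Grace"]:
        if name.startswith("A"):
            messages.append(greet(name))
        else:
            messages.append(greet(name, "."))
    # messages == ['Hello, Ada!', 'Hello, Alan!', 'Hello, Grace.']
    ```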

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: 1 year
    • Access options: web streaming, mobile streaming
    • Certification of completion included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: all levels
    • Free technical support available 24/5 via email, telephone and online chat
    • Limit: 1 for you, unlimited as gifts
    Compatibility:
    • Mac
    • Windows 7 or later
    • Android
    • Browser Supported: Internet Explorer 8 or later, Google Chrome, Safari 6 or later, Mozilla Firefox
    Note: If using Apple Safari, you must change your preferences. For more information, click here.

    THE EXPERT

    Vizualcoaching is an institution of passionate and talented educationists who support over 300,000 students all over the world. The institution consists of over 180 individuals all specialising in their own aspects of combining education with technology. For more details on this course and instructor, click here.

    Learn Python Django from Scratch


    KEY FEATURES

    If you want to get serious about web development, you need to know Python Django. In this course you'll create your own sophisticated website from scratch using Django and incorporate an authentication system, ecommerce with PayPal and Stripe, geolocation, map integration, and web services. Through this example-driven course, you'll gain a nuanced understanding of how to build sites with Django.

    • Access 52 lectures & 6.5 hours of content 24/7
    • Understand how Django creates web apps, specifically software backed by a database
    • Learn how Django's framework makes building database-driven websites easier
    • Explore PyCharm, a smart code editor that supports Python, JavaScript, CSS, & more
    • Work w/ Git, the world's largest free & open source version control system

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: 1 year
    • Access options: web streaming, mobile streaming
    • Certification of completion included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: all levels
    • Free technical support available 24/5 via email, telephone and online chat
    • Limit: 1 for you, unlimited as gifts

    Compatibility

    • Mac
    • Windows 7 or later
    • Android
    • Browser Supported: Internet Explorer 8 or later, Google Chrome, Safari 6 or later, Mozilla Firefox
    Note: If using Apple Safari, you must change your preferences. For more information, click here.

    THE EXPERT

    Vizualcoaching is an institution of passionate and talented educationists who support over 300,000 students all over the world. The institution consists of over 180 individuals all specialising in their own aspects of combining education with technology. For more details on this course and instructor, click here.

    Python Game Development: Create a Flappy Bird Clone


    KEY FEATURES

    Python is one of the most versatile and commonly used programming languages in the world. It's also generally considered one of the easiest to learn, which is why this course is so valuable. You'll learn Python programming while building your very own clone of the oddly addictive mobile game, Flappy Bird. Learning coding can be fun, as you're about to find out!

    • Access 9 units of study 24/7
    • Gain practical experience w/ Python game development using Python programming concepts & initial coding
    • Understand input controls, boundaries, crash events, & menu creation
    • Add game elements like logic, score display, difficulty levels & more
    • Enjoy playing the Flappy Bird clone you've created & share it w/ friends
    • Experience the game development process
    • Use your new skills as a gateway to a future in game development
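    To give a feel for the crash events and movement logic the bullets mention, here is a minimal, dependency-free sketch of Flappy Bird-style physics. The names and numbers are our own illustration, not the course's code, which would typically build on a game library:

    ```python
    # Hypothetical sketch of Flappy Bird-style movement and crash logic
    # (constants and function names are invented for illustration).

    GRAVITY = 0.5      # downward acceleration per frame
    FLAP_IMPULSE = -8  # upward velocity applied when the player flaps

    def step(y, velocity, flapped):
        """Advance the bird one frame; returns (new_y, new_velocity)."""
        velocity = FLAP_IMPULSE if flapped else velocity + GRAVITY
        return y + velocity, velocity

    def hits_pipe(bird_y, gap_top, gap_bottom):
        """Crash event: the bird is outside the gap between pipe edges."""
        return bird_y < gap_top or bird_y > gap_bottom

    # Falling with no flap accelerates the bird downward:
    y, v = step(100, 0, flapped=False)   # (100.5, 0.5)
    # A flap resets velocity upward:
    y, v = step(y, v, flapped=True)      # v is now -8
    ```

    In a real game loop, `step` would run once per frame and `hits_pipe` once per pipe pair, with boundaries checked the same way.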

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: 1 year
    • Access options: web streaming, mobile streaming
    • Certification of completion included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: all levels
    • Free technical support available 24/5 via email, telephone and online chat
    • Limit: 1 for you, unlimited as gifts

    Compatibility

    • Mac
    • Windows 7 or later
    • Android
    • Browser Supported: Internet Explorer 8 or later, Google Chrome, Safari 6 or later, Mozilla Firefox
    Note: If using Apple Safari, you must change your preferences. For more information, click here.

    THE EXPERT

    Vizualcoaching is an institution of passionate and talented educationists who support over 300,000 students all over the world. The institution consists of over 180 individuals all specialising in their own aspects of combining education with technology. For more details on this course and instructor, click here.

    Python Web Programming


    KEY FEATURES

    Python is one of the most popular coding languages and is a favorite of some of the web's biggest giants. It's designed with accessibility, simplicity, and versatility in mind, and is commonly used in everything from machine learning to web development. In this course, you'll learn how to optimize Python's web programming capabilities.

    • Access 57 lectures 24/7
    • Discuss the concept of Object Oriented Programming
    • Learn SQLite w/ Python
    • Insert dynamic data, read data, & update data w/ SQLite
    • Boost your resume & open up a new realm of career opportunities
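    The insert/read/update workflow with SQLite that the bullets describe can be sketched in a few lines with Python's built-in sqlite3 module. The table and column names here are illustrative, not taken from the course:

    ```python
    # Minimal SQLite workflow: insert dynamic data, read it, update it.
    # (Table and column names are hypothetical.)
    import sqlite3

    conn = sqlite3.connect(":memory:")  # throwaway in-memory database
    cur = conn.cursor()
    cur.execute("CREATE TABLE users (name TEXT, score INTEGER)")

    # Insert dynamic data via parameter placeholders (never string-format SQL)
    cur.execute("INSERT INTO users VALUES (?, ?)", ("alice", 10))

    # Read data back
    row = cur.execute("SELECT score FROM users WHERE name = ?", ("alice",)).fetchone()

    # Update data
    cur.execute("UPDATE users SET score = ? WHERE name = ?", (row[0] + 5, "alice"))
    conn.commit()
    ```

    The `?` placeholders are the important habit here: they let SQLite handle quoting, which protects against SQL injection when the data comes from users.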

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: 1 year
    • Access options: web streaming, mobile streaming
    • Certification of completion included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: all levels
    • Free technical support available 24/5 via email, telephone and online chat
    • Limit: 1 for you, unlimited as gifts

    Compatibility

    • Mac
    • Windows 7 or later
    • Android
    • Browser Supported: Internet Explorer 8 or later, Google Chrome, Safari 6 or later, Mozilla Firefox
    Note: If using Apple Safari, you must change your preferences. For more information, click here.

    THE EXPERT

    Vizualcoaching is an institution of passionate and talented educationists who support over 300,000 students all over the world. The institution consists of over 180 individuals all specialising in their own aspects of combining education with technology. For more details on this course and instructor, click here.

    Python Object Oriented Programming Fundamentals


    KEY FEATURES

    Object-Oriented Programming is a programming model organized around objects rather than "actions," and data rather than logic. This discipline can be used to create advanced yet easily maintainable Python applications, and this course will teach you how to do just that. By learning the most up-to-date tools and techniques, you'll be ready to build applications fast and deploy them with extreme efficiency.

    • Access 7 units of study 24/7
    • Build on existing Python expertise to learn how to create both simple & advanced maintainable Python applications
    • Become confident in the new approach to programming expected from most employers
    • Create newly featured Python applications
    • Cover class construct, the special_init_method, attributes, methods, class variables, & more
    • Explore how to create an object, obtain object attributes, change object attribute values & more
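    The constructs listed above fit together in a few lines. This hypothetical `Dog` class (not the course's own example) shows the class construct, the special `__init__` method, attributes, methods, and a class variable, plus creating an object and changing its attribute values:

    ```python
    # Illustrative-only example of the OOP constructs listed above.
    class Dog:
        species = "Canis familiaris"    # class variable, shared by all instances

        def __init__(self, name, age):  # special __init__ method (constructor)
            self.name = name            # instance attributes
            self.age = age

        def birthday(self):             # method acting on the instance
            self.age += 1
            return self.age

    rex = Dog("Rex", 3)        # create an object
    print(rex.name)            # obtain an object attribute
    rex.age = 4                # change an attribute value directly
    rex.birthday()             # or change state through a method
    ```

    Instance attributes (`name`, `age`) belong to each object, while `species` lives on the class itself and is shared, which is the distinction the bullet about class variables points at.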

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: 1 year
    • Access options: web streaming, mobile streaming
    • Certification of completion included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: all levels
    • No software included
    • Free technical support available 24/5 via email, telephone and online chat
    • Limit: 1 for you, unlimited as gifts

    Compatibility

    • Mac
    • Windows 7 or later
    • Android
    • Browser Supported: Internet Explorer 8 or later, Google Chrome, Safari 6 or later, Mozilla Firefox
    Note: If using Apple Safari, you must change your preferences. For more information, click here.

    THE EXPERT

    Vizualcoaching is an institution of passionate and talented educationists who support over 300,000 students all over the world. The institution consists of over 180 individuals all specialising in their own aspects of combining education with technology. For more details on this course and instructor, click here.

    Data Analysis with Python and Pandas


    KEY FEATURES

    Data is everywhere and companies are constantly gathering information on consumers to make better informed business decisions. As such, skilled data experts are in demand to help interpret data. In this course, you'll learn how to analyze data, manipulate data sets, and master data mining in Python. By course's end, you'll have a coveted skill set that will help you score high-paying jobs.

    • Access 51 lectures & 6 hours of content 24/7
    • Learn the fundamentals of Pandas, the library of data structures you can use in conjunction w/ Python
    • Run data manipulation, logical categorizing, statistical functions & more in conjunction w/ Pandas
    • Work w/ missing data, combine data, & tackle advanced operations like resampling, correlation, & mapping
    • Explore the NumPy library of high level mathematical functions
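    As a taste of the missing-data handling and statistical functions listed above, here is a tiny Pandas example. The DataFrame contents are invented for illustration and assume pandas and NumPy are installed:

    ```python
    # Hedged sketch of basic Pandas operations: missing data + statistics.
    # (The data is made up; requires pandas and numpy.)
    import numpy as np
    import pandas as pd

    df = pd.DataFrame({"city": ["A", "B", "C"],
                       "sales": [10.0, np.nan, 30.0]})

    # Fill the missing value with the column mean (a common imputation step)
    filled = df["sales"].fillna(df["sales"].mean())

    total = filled.sum()   # statistical functions work column-wise
    ```

    `mean()` skips NaN by default, so the missing entry is imputed from the two observed values rather than dragging the average down.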

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: 1 year
    • Access options: web streaming, mobile streaming
    • Certification of completion included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: all levels
    • Free technical support available 24/5 via email, telephone and online chat
    • Limit: 1 for you, unlimited as gifts

    Compatibility

    • Mac
    • Windows 7 or later
    • Android
    • Browser Supported: Internet Explorer 8 or later, Google Chrome, Safari 6 or later, Mozilla Firefox
    Note: If using Apple Safari, you must change your preferences. For more information, click here.

    THE EXPERT

    Vizualcoaching is an institution of passionate and talented educationists who support over 300,000 students all over the world. The institution consists of over 180 individuals all specialising in their own aspects of combining education with technology. For more details on this course and instructor, click here.

    Data Visualisation with Python and Matplotlib


    KEY FEATURES

    Data visualization is a skill set that is in high demand from businesses in all industries, and this course will enable you to jump on that wave. Focusing on Python 3 in conjunction with Matplotlib, you'll learn how to translate data into easy-to-read, accessible charts and graphs that companies can benefit from.

    • Access 58 lectures & 7 hours of content 24/7
    • Learn Python 3 & Matplotlib
    • Discover how to visualize multiple forms of graphs in both 2D & 3D
    • Load & organize data from various sources for visualization
    • Create live graphs & learn how to customize them
    • Master basic functions like labels, titles, window buttons, & legends
    • Explore advanced features like customized spines, styles, annotations, averages, & more

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: 1 year
    • Access options: web streaming, mobile streaming
    • Certification of completion included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: all levels
    • Free technical support available 24/5 via email, telephone and online chat
    • Limit: 1 for you, unlimited as gifts

    Compatibility

    • Mac
    • Windows 7 or later
    • Android
    • Browser Supported: Internet Explorer 8 or later, Google Chrome, Safari 6 or later, Mozilla Firefox
    Note: If using Apple Safari, you must change your preferences. For more information, click here.

    THE EXPERT

    Vizualcoaching is an institution of passionate and talented educationists who support over 300,000 students all over the world. The institution consists of over 180 individuals all specialising in their own aspects of combining education with technology. For more details on this course and instructor, click here.

              The Complete Programming Language Bootcamp for $36   
    96+ Hours to Learn Over 8 Programming Languages by Example
    Expires May 23, 2022 23:59 PST
    Buy now and get 91% off

    Learn By Example: Scala


    KEY FEATURES

    The best way to learn is by example, and in this course you'll get the lowdown on Scala with 65 comprehensive, hands-on examples. Scala is a general-purpose programming language that is highly scalable, making it incredibly useful in building programs. Over this immersive course, you'll explore just how Scala can help your programming skill set, and how you can set yourself apart from other programmers by knowing this efficient tool.

    • Access 67 lectures & 6.5 hours of content 24/7
    • Use Scala w/ an intermediate level of proficiency
    • Read & understand Scala programs, including those w/ highly functional forms
    • Identify the similarities & differences between Java & Scala to use each to their advantages

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: lifetime
    • Access options: web streaming, mobile streaming
    • Certification of completion not included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: intermediate

    Compatibility

    • Internet required

    THE EXPERT

    Loonycorn is comprised of four individuals—Janani Ravi, Vitthal Srinivasan, Swetha Kolalapudi and Navdeep Singh—who have honed their tech expertises at Google and Flipkart. The team believes it has distilled the instruction of complicated tech concepts into funny, practical, engaging courses, and is excited to be sharing its content with eager students.

    From 0 to 1: Learn Java Programming


    KEY FEATURES

    Java is one of the world's leading programming languages because its applications are virtually endless. Whether you've never coded before or you have experience with other languages and want to extend to Java, this large course will take you from beginner to an early intermediate level. Programming should be as fun as it is useful, and this course takes that mantra seriously!

    • Access 84 lectures & 17 hours of content 24/7
    • Create a daily stock quote summarizer to output data in an Excel spreadsheet
    • Build a news curation app to summarize newspaper articles into a concise email snippet
    • Get support w/ choosing a programming environment & downloading & setting up IntelliJ
    • Learn simple hello-world style programs in functional, imperative & object-oriented paradigms
    • Understand how to use maps, lists & arrays

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: lifetime
    • Access options: web streaming, mobile streaming
    • Certification of completion not included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: all levels

    Compatibility

    • Internet required

    THE EXPERT

    Loonycorn is comprised of four individuals—Janani Ravi, Vitthal Srinivasan, Swetha Kolalapudi and Navdeep Singh—who have honed their tech expertises at Google and Flipkart. The team believes it has distilled the instruction of complicated tech concepts into funny, practical, engaging courses, and is excited to be sharing its content with eager students.

    From 0 to 1: Learn Python Programming


    KEY FEATURES

    Python's one of the easiest yet most powerful programming languages you can learn, and it's proven its utility at top companies like Dropbox and Pinterest. In this quick and dirty course, you'll learn to write clean, efficient Python code, learning to expedite your workflow by automating manual work, implementing machine learning techniques, and much more.

    • Dive into Python w/ 10.5 hours of content
    • Acquire the database knowledge you need to effectively manipulate data
    • Eliminate manual work by creating auto-generating spreadsheets w/ xlsxwriter
    • Master machine learning techniques w/ tools like scikit-learn
    • Utilize tools for text processing, including nltk
    • Learn how to scrape websites like the NYTimes & Washington Post using Beautiful Soup
    • Complete drills to consolidate your newly acquired knowledge
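    The course scrapes sites with Beautiful Soup; as a dependency-free stand-in, the standard library's `html.parser` shows the same idea of pulling headlines out of markup (the HTML snippet below is made up, and this is not the course's code):

    ```python
    # Stdlib stand-in for the scraping idea: extract <h2> headlines.
    # (Beautiful Soup, used in the course, makes this far more concise.)
    from html.parser import HTMLParser

    class HeadlineParser(HTMLParser):
        def __init__(self):
            super().__init__()
            self.in_h2 = False
            self.headlines = []

        def handle_starttag(self, tag, attrs):
            if tag == "h2":
                self.in_h2 = True

        def handle_endtag(self, tag):
            if tag == "h2":
                self.in_h2 = False

        def handle_data(self, data):
            if self.in_h2:                    # only keep text inside <h2>
                self.headlines.append(data.strip())

    page = "<h1>Front Page</h1><h2>Local news</h2><p>...</p><h2>Weather</h2>"
    parser = HeadlineParser()
    parser.feed(page)
    # parser.headlines == ["Local news", "Weather"]
    ```

    Beautiful Soup wraps this kind of event-driven parsing in a searchable tree, which is why it is the tool of choice once pages get messy.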

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: lifetime access
    • Access options: web streaming, mobile streaming
    • Certification of completion not included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: all levels

    Compatibility

    • Internet required

    THE EXPERT

    Loonycorn is comprised of four individuals--Janani Ravi, Vitthal Srinivasan, Swetha Kolalapudi and Navdeep Singh--who have honed their tech expertises at Google and Flipkart. The team believes it has distilled the instruction of complicated tech concepts into funny, practical, engaging courses, and is excited to be sharing its content with eager students.

    Learn By Example: C++ Programming - 75 Solved Problems


    KEY FEATURES

    C++ seems intimidating, not least because it looks like the best grade anyone can get in a class on the topic. This course will show you otherwise, offering 75 real-world use cases on this powerful language. We guarantee you'll acquire an A+ understanding of C++, or at least manage to stay calm next time whispers of "Object-Oriented Programming" caress your weary ears.

    • Learn all about C++ w/ 16 hours of content
    • Dive into a powerful, versatile language that powers everything from desktop apps to SQL servers
    • Utilize 75 use cases to better understand how C++ works
    • Seamlessly build upon a C programming background to move to C++
    • Master objects, classes & other object-oriented programming principles
    • Understand how to use modifiers, classes, objects & more

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: lifetime access
    • Access options: web streaming, mobile streaming
    • Certification of completion not included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: all levels

    Compatibility

    • Internet required

    THE EXPERT

    Loonycorn is comprised of four individuals--Janani Ravi, Vitthal Srinivasan, Swetha Kolalapudi and Navdeep Singh--who have honed their tech expertises at Google and Flipkart. The team believes it has distilled the instruction of complicated tech concepts into funny, practical, engaging courses, and is excited to be sharing its content with eager students.

    From 0 to 1: C Programming


    KEY FEATURES

    Consider C the programming equivalent of a French mother sauce. Just as chefs can create countless derivatives from a humble Bechamel, so too can developers easily master scores of languages upon learning C. This course will walk you through technical concepts such as loops, strings, and more, allowing you to conquer C and build a wide variety of apps and programs in no time at all.

    • Master C programming w/ 12 hours of content
    • Master language constructs: if/else & case statements, while & for loops, etc.
    • Familiarize yourself w/ functions, arrays & strings
    • Understand basic principles important to general programming
    • Craft a strong foundation for other languages: Objective-C, PHP & more

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: lifetime access
    • Access options: web streaming, mobile streaming
    • Certification of completion not included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: all levels

    Compatibility

    • Internet required

    THE EXPERT

    Loonycorn is comprised of four individuals--Janani Ravi, Vitthal Srinivasan, Swetha Kolalapudi and Navdeep Singh--who have honed their tech expertises at Google and Flipkart. The team believes it has distilled the instruction of complicated tech concepts into funny, practical, engaging courses, and is excited to be sharing its content with eager students.

    Learn By Example: PHP For Dynamic Websites


    KEY FEATURES

    PHP is a server-side HTML embedded scripting language that provides web developers with a full suite of tools for building dynamic websites. This course takes a highly practical approach to PHP, teaching you how to build a smart website by example, so you can easily adapt what you learn into real-life projects. Any web developer worth their salt is going to want to take this course!

    • Access 76 lectures & 13 hours of content 24/7
    • Install & set up a basic web server w/ PHP
    • Learn web security basics like validating & sanitizing user input data, mitigating XSS & XSRF attacks, & more
    • Perform MySQL integration & installation to connect to a database
    • Understand cookies, sessions & the differences between the two
    • Master end to end login authentication
    • Explore object oriented PHP, classes, inheritance & polymorphism

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: lifetime
    • Access options: web streaming, mobile streaming
    • Certification of completion not included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: all levels

    Compatibility

    • Internet required

    THE EXPERT

    Loonycorn is Janani Ravi, Vitthal Srinivasan, Swetha Kolalapudi and Navdeep Singh. They have studied at Stanford, IIM Ahmedabad, the IITs and have spent years (decades, actually) working in tech in the Bay Area, New York, Singapore and Bangalore.

    Janani: 7 years at Google (New York, Singapore); Studied at Stanford; also worked at Flipkart and Microsoft

    Vitthal: Also Google (Singapore) and studied at Stanford; Flipkart, Credit Suisse and INSEAD too

    Swetha: Early Flipkart employee, IIM Ahmedabad and IIT Madras alum

    Navdeep: longtime Flipkart employee too, and IIT Guwahati alum

    Learn By Example: The Foundations of HTML, CSS & JavaScript


    KEY FEATURES

    There are many shortcuts in web coding that may ultimately lead to issues down the line. This course will teach you solid fundamentals of JavaScript, HTML, and CSS, and give you the skills you need to write efficient and lasting code. Perfect for the inexperienced, this course provides a great background in a range of popular web coding frameworks that will facilitate the learning of other languages in the future.

    • Access 13 hours of content & 93 lessons 24/7
    • Begin your programming path w/ basic HTML
    • Understand inheritance & selection in CSS, two essential concepts
    • Discover closures & prototypes in JavaScript, and how they differ from other languages
    • Learn JSON & its importance to linking back-ends written in Java/front-ends written in JavaScript
    • Use the Document-Object-Model to tie it all together
    • Reach the instructors any time by e-mail

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: lifetime
    • Access options: web streaming, mobile streaming
    • Certification of completion included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: beginner

    Compatibility

    • Internet required

    THE EXPERT

    Loonycorn is comprised of four individuals—Janani Ravi, Vitthal Srinivasan, Swetha Kolalapudi and Navdeep Singh—who have honed their tech expertises at Google and Flipkart. The team believes it has distilled the instruction of complicated tech concepts into funny, practical, engaging courses, and is excited to be sharing its content with eager students.

    Learn by Example: Ruby on Rails


    KEY FEATURES

    Although you hear Ruby on Rails mentioned frequently as one thing, it's actually a combination of two different elements: the Ruby programming language, and the Rails development framework. In this course, you'll tackle each topic individually, learning how to write programs in Ruby and run them on the Rails framework. By course's end, you'll have a firm grasp of this powerful, popular web development tool.

    • Access 69 lectures & 8 hours of content 24/7
    • Build intermediate level web applications using the Rails framework
    • Implement programs in the Ruby programming language
    • Understand Ruby language features like fibers, blocks & mix-ins that are very different from other common languages

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: lifetime
    • Access options: web streaming, mobile streaming
    • Certification of completion included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: all levels

    Compatibility

    • Internet required

    THE EXPERT

    Loonycorn is comprised of four individuals—Janani Ravi, Vitthal Srinivasan, Swetha Kolalapudi and Navdeep Singh—who have honed their tech expertises at Google and Flipkart. The team believes it has distilled the instruction of complicated tech concepts into funny, practical, engaging courses, and is excited to be sharing its content with eager students.

              The Crash Course Coding Bundle for $39   
    Bring Yourself Up to Speed with Coding Essentials In 63 Hours
    Expires September 16, 2017 23:59 PST
    Buy now and get 98% off

    Fundamentals of Operating Systems


    KEY FEATURES

    Operating systems are the foundations on which our computers, mobile devices, robots, and countless other things run. They are hugely diverse, and each offers unique features and suffers from unique drawbacks. Designed around the companion textbook, Modern Operating Systems by Andrew S. Tanenbaum, this course gives you a hands-on view of all the functions within an operating system and their interrelationships.

    • Access 17 lectures & 6 hours of content 24/7
    • Explain the overall objectives & structure of any modern operating system
    • Identify the differences & similarities between operating systems
    • Describe the functions within an operating system & how they work together
    • List what causes many operating system errors & crash conditions
    • Explain how to effectively use the more advanced operating system features to improve productivity
    • Choose which operating system approach best suits individual situations

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: lifetime
    • Access options: web streaming, mobile streaming
    • Certification of completion included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: all levels

    Compatibility

    • Internet required

    THE EXPERT

    At GogoTraining, their mission is to help you master the world by unlocking your full potential. They do that by working with technology masters to create curriculum paths in key technologies. These paths allow you to jump in at the level that is right for you so you can master the technology. For more details on this course and instructor, click here.

    C Programming Part 1 and 2


    KEY FEATURES

    This course combines both parts of a C Programming Bootcamp to move you from the absolute basics of C to more complex data types such as arrays, structures, and pointers using solid programming techniques. C is a high-level, general-purpose programming language that is ideal for developing firmware or portable applications. Because of its wide range of uses, it's an infinitely valuable language to learn, and this course will get you up to speed.

    • Access 38 lectures & 13 hours of content 24/7
    • Learn the basics of C programming
    • Utilize single & multi-dimensional arrays
    • Program w/ structures
    • Implement pointers through a thorough understanding of their use
    • Manipulate character strings
    • Program w/ C effectively

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: lifetime
    • Access options: web streaming, mobile streaming
    • Certification of completion included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: all levels

    Compatibility

    • Internet required

    THE EXPERT

    At GogoTraining, their mission is to help you master the world by unlocking your full potential. They do that by working with technology masters to create curriculum paths in key technologies. These paths allow you to jump in at the level that is right for you so you can master the technology. For more details on this course and instructor, click here.

    Java 8 Part 1


    KEY FEATURES

    It's no secret that learning to code can open up a whole new horizon of lucrative career opportunities, and there's no time like the present to learn. In this course, you'll dive into Java, one of the most universally used programming languages, and build a strong foundation in Object-Oriented Programming. Soon enough, you'll be ready to take (and ace!) Oracle's Java SE 8 Programmer Certification 1 exam.

    • Access 31 lectures & 7.5 hours of content 24/7
    • Create programs w/ a strong understanding of the Java paradigm
    • Implement standard Java language constructs like if statements, loops, & switches
    • Utilize arrays in Java
    • Understand objects, classes, methods, inheritance, & scope
    • Discover the basics of Object-Oriented Programming

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: lifetime
    • Access options: web streaming, mobile streaming
    • Certification of completion included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: all levels

    Compatibility

    • Internet required

    THE EXPERT

    At GogoTraining, their mission is to help you master the world by unlocking your full potential. They do that by working with technology masters to create curriculum paths in key technologies. These paths allow you to jump in at the level that is right for you so you can master the technology. For more details on this course and instructor, click here.

    Java 8 Part 2


    KEY FEATURES

    In the second part of this 2-part Java course, you'll delve into more advanced Java features like APIs, events, using databases, and much more. You'll gain a more well-rounded, higher-level understanding of Java that you can translate into real job prospects. Plus, by the end of this course, you'll have all the knowledge you need to ace Oracle's Java SE 8 Programmer 1 Certification exam and further enhance your employability.

    • Access 39 lectures & 9 hours of content 24/7
    • Understand Java 8 enhancements like the new date/time API & lambda expressions
    • Handle events in Java
    • Implement interfaces, exceptions & assertions
    • Utilize packages & use ArrayLists
    • Understand abstraction, polymorphism, & encapsulation

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: lifetime
    • Access options: web streaming, mobile streaming
    • Certification of completion included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: all levels

    Compatibility

    • Internet required

    THE EXPERT

    At GogoTraining, their mission is to help you master the world by unlocking your full potential. They do that by working with technology masters to create curriculum paths in key technologies. These paths allow you to jump in at the level that is right for you so you can master the technology. For more details on this course and instructor, click here.

    Perl Programming 1 and 2


    KEY FEATURES

    Perl is a family of languages that borrows features from other programming languages, including C, shell script, AWK, and sed, to provide powerful text processing facilities. In this two-part course, you'll learn how to write scripts to automate tasks using the fundamental Perl building blocks. Once you've covered the basics, you'll move on to file handles and tests, managing OS processes, and many more advanced topics.

    • Access 25 lectures & 12 hours of content 24/7
    • Describe the fundamental data types for Perl
    • Program w/ branching & looping constructs
    • Input from the keyboard & output from the screen
    • Utilize regular expressions w/ Perl
    • Create & use functions
    • Use loop & flow modifiers, access files w/ file handles, create formatted output, & more
    • Manage operating system processes within Perl, manipulate strings, create hash files, & more

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: lifetime
    • Access options: web streaming, mobile streaming
    • Certification of completion included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: all levels

    Compatibility

    • Internet required

    THE EXPERT

    At GogoTraining, their mission is to help you master the world by unlocking your full potential. They do that by working with technology masters to create curriculum paths in key technologies. These paths allow you to jump in at the level that is right for you so you can master the technology. For more details on this course and instructor, click here.

    Python Programming Essentials


    KEY FEATURES

    Python is a general-purpose programming language that was explicitly designed to be highly readable, making it one of the ideal languages for beginner programmers. Despite being relatively easy to learn, Python is an extremely powerful language, used in the creation of YouTube, Instagram, and Reddit, and commonly seen in machine learning as well. Because Python has so many different uses, there are always many well-paying Python programming jobs available. After this course, you'll be well on your way to breaking into one of those jobs.

    • Access 26 lectures & 6 hours of content 24/7
    • Learn the basic structure of Python including Python's data types & control flow constructs
    • Write, debug & execute Python programs
    • Describe & work w/ nested data types & exception handling
    • Understand the basics of modules & object oriented programming in Python
    • Use Python to work w/ files & the operating system
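    Two of the bullets above, nested data types and exception handling, combine naturally in a small example. The inventory data below is invented purely for illustration:

    ```python
    # Hypothetical example: a nested dict plus KeyError handling.
    inventory = {"fruit": {"apples": 4, "pears": 2},
                 "veg": {"leeks": 1}}

    def count(category, item):
        """Look up a nested value, treating anything missing as zero."""
        try:
            return inventory[category][item]
        except KeyError:           # missing category OR missing item
            return 0

    count("fruit", "apples")   # 4
    count("fruit", "plums")    # 0, no crash
    ```

    Catching `KeyError` at the lookup site keeps the caller's control flow simple, which is the usual payoff of Python's exception-based style.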

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: lifetime
    • Access options: web streaming, mobile streaming
    • Certification of completion included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: all levels

    Compatibility

    • Internet required

    THE EXPERT

    At GogoTraining, their mission is to help you master the world by unlocking your full potential. They do that by working with technology masters to create curriculum paths in key technologies. These paths allow you to jump in at the level that is right for you so you can master the technology. For more details on this course and instructor, click here.

    Advanced Python Programming


    KEY FEATURES

    While it is one of the easier programming languages to learn, Python has such broad functionality that programs can rapidly escalate from beginner to advanced. This course confronts many of the advanced features of Python so you can enhance your Python knowledge and be eligible for more complicated, better-paying programming jobs. If you want to work in any programming field, you're going to want to take this course.

    • Access 26 lectures & 6 hours of content 24/7
    • Write Python programs using complex data types
    • Create object oriented programs in Python
    • Use Python to create GUI programs
    • Master regular expressions & threads in Python programs
    • Build network programs in Python
    • Work w/ SQL databases
    • Extend Python programs w/ C code
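
    Two of the topics above, regular expressions and threads, can be sketched with the standard library alone. The log line and worker function here are invented for illustration:

```python
import re
import threading

# Regular expressions: pull version numbers out of free text.
LOG = "pkg foo upgraded 1.2.0 -> 1.3.1"
versions = re.findall(r"\d+\.\d+\.\d+", LOG)
print(versions)  # ['1.2.0', '1.3.1']

# Threads: run a simple worker concurrently and join the results.
results = []
lock = threading.Lock()

def worker(n):
    with lock:  # guard shared state against concurrent appends
        results.append(n * n)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(results))  # [0, 1, 4, 9]
```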

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: lifetime
    • Access options: web streaming, mobile streaming
    • Certification of completion included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: all levels

    Compatibility

    • Internet required

    THE EXPERT

    At GogoTraining, their mission is to help you master the world by unlocking your full potential. They do that by working with technology masters to create curriculum paths in key technologies. These paths allow you to jump in at the level that is right for you so you can master the technology. For more details on this course and instructor, click here.

    IoT Programming


    KEY FEATURES

    With the advent of very effective and efficient real-time operating systems like FreeRTOS, programmers can take better advantage of real-time features while building Internet of Things devices. This course explores concepts of real-time and multi-tasking programming using FreeRTOS, helping you get close to the hardware through hands-on lab exercises. Before you know it, you'll be able to design reliable embedded devices for IoT and understand how multi-tasking operating systems lead to more robust, scalable, and maintainable designs.

    • Access 14 lectures & 3.5 hours of content 24/7
    • Describe the role of asynchronous interrupts & why they make device software difficult
    • Discuss the basic structure & organization of small multi-tasking operating systems
    • Identify the structure & organization of FreeRTOS
    • Design, code, & debug device software using the Eclipse IDE
    • Partition device software to most effectively utilize an operating system

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: lifetime
    • Access options: web streaming, mobile streaming
    • Certification of completion included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: all levels

    Compatibility

    • Internet required

    THE EXPERT

    At GogoTraining, their mission is to help you master the world by unlocking your full potential. They do that by working with technology masters to create curriculum paths in key technologies. These paths allow you to jump in at the level that is right for you so you can master the technology. For more details on this course and instructor, click here.

              Big Data Power Tools Bundle for $36   
    Crunch Numbers & Visualize Data Like a Pro with 39+ Hours of Training In Some of Today's Best Data Analysis Tools
    Expires May 02, 2022 23:59 PST
    Buy now and get 93% off

    Connect the Dots: Linear and Logistic Regression in Excel, Python and R


    KEY FEATURES

    Linear regression is a powerful method for quantifying the cause-and-effect relationships that shape the phenomena in the world around us. This course will teach you how to build robust linear models that stand up to scrutiny when you apply them to real-world situations. You'll even put what you've learned into practice by leveraging Excel, R, and Python to build a model for stock returns.

    • Access 40 lectures & 5 hours of content 24/7
    • Cover method of least squares, explaining variance, & forecasting an outcome
    • Explore residuals & assumptions about residuals
    • Implement simple & multiple regression in Excel, R, & Python
    • Interpret regression results & avoid common pitfalls
    • Introduce a categorical variable
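
    The method of least squares covered above can be illustrated in a few lines of plain Python (the course itself works in Excel, R, and Python; the data points below are made up):

```python
# Simple linear regression by the method of least squares.
# Invented data, roughly y = 2x.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 8.0, 9.9]

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# slope = covariance(x, y) / variance(x)
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
      / sum((x - mean_x) ** 2 for x in xs)
intercept = mean_y - slope * mean_x

print(round(slope, 2), round(intercept, 2))  # 1.97 0.11
```

    Forecasting an outcome is then just `intercept + slope * x` for a new `x`.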

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: lifetime
    • Access options: web streaming, mobile streaming
    • Certification of completion not included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: all levels

    Compatibility

    • Internet required

    THE EXPERT

    Loonycorn comprises four individuals—Janani Ravi, Vitthal Srinivasan, Swetha Kolalapudi and Navdeep Singh—who honed their tech expertise at Google and Flipkart. The team has distilled complicated tech concepts into funny, practical, engaging courses, and is excited to share its content with eager students.

    Connect the Dots: Factor Analysis in Excel, Python and R


    KEY FEATURES

    Factor analysis helps cut through the clutter when you have a lot of correlated variables explaining a single effect. This course will help you understand factor analysis and its link to linear regression. You'll explore how Principal Components Analysis (PCA) is a cookie-cutter technique for factor extraction, and how it relates to machine learning.

    • Access 19 lectures & 1.5 hours of content 24/7
    • Understand principal components
    • Discuss eigenvalues & eigenvectors
    • Perform eigenvalue decomposition
    • Use principal components for dimensionality reduction & exploratory factor analysis
    • Apply PCA to explain the returns of a technology stock like Apple
    • Find the principal components & use them to build a regression model
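
    As a rough sketch of what eigenvalue decomposition buys you, here is PCA on two invented, correlated variables, with the 2x2 eigenvalues solved in closed form rather than with a linear algebra library:

```python
import math

# Principal components of two correlated variables, done by hand:
# build the 2x2 covariance matrix, then solve its characteristic
# polynomial for the eigenvalues (closed form exists for 2x2).
xs = [2.5, 0.5, 2.2, 1.9, 3.1, 2.3]
ys = [2.4, 0.7, 2.9, 2.2, 3.0, 2.7]

n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
cxx = sum((x - mx) ** 2 for x in xs) / (n - 1)
cyy = sum((y - my) ** 2 for y in ys) / (n - 1)
cxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (n - 1)

# Eigenvalues of [[cxx, cxy], [cxy, cyy]]:
# lambda = (trace +/- sqrt(trace^2 - 4*det)) / 2
trace = cxx + cyy
det = cxx * cyy - cxy * cxy
disc = math.sqrt(trace * trace - 4 * det)
lam1, lam2 = (trace + disc) / 2, (trace - disc) / 2

# The first principal component explains lam1 / (lam1 + lam2)
# of the total variance -- the essence of dimensionality reduction.
explained = lam1 / (lam1 + lam2)
print(round(explained, 3))
```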

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: lifetime
    • Access options: web streaming, mobile streaming
    • Certification of completion not included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: all levels

    Compatibility

    • Internet required

    THE EXPERT

    Loonycorn comprises four individuals—Janani Ravi, Vitthal Srinivasan, Swetha Kolalapudi and Navdeep Singh—who honed their tech expertise at Google and Flipkart. The team has distilled complicated tech concepts into funny, practical, engaging courses, and is excited to share its content with eager students.

    Number-Crunching in R


    KEY FEATURES

    This course is an introduction to the R programming language. R has its own set of data structures that take some getting used to, and this course will help you familiarize yourself with the intricacies of data manipulation in R. You'll dive into data analysis with R, visualizing a variety of plots and graphs, descriptive statistics, and much more.

    • Access 59 lectures & 5.5 hours of content 24/7
    • Harness R & R packages to read, process, & visualize data
    • Understand the intricacies of all the different data structures in R
    • Use descriptive statistics to perform a quick study of some data & present results
    • Discuss data analysis & visualization w/ R

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: lifetime
    • Access options: web streaming, mobile streaming
    • Certification of completion not included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: all levels

    Compatibility

    • Internet required

    THE EXPERT

    Loonycorn comprises four individuals—Janani Ravi, Vitthal Srinivasan, Swetha Kolalapudi and Navdeep Singh—who honed their tech expertise at Google and Flipkart. The team has distilled complicated tech concepts into funny, practical, engaging courses, and is excited to share its content with eager students.

    Advanced Analytical Queries in Hive


    KEY FEATURES

    Hive helps you leverage the power of distributed computing and Hadoop for analytical processing. Its interface, HiveQL, is very similar to SQL, making it an especially convenient tool to know. This course will help you take advantage of Hive features that help you tune performance and perform complex transformations.

    • Access 50 lectures & 6 hours of content 24/7
    • Write complex analytical queries on data in Hive & uncover insights
    • Leverage ideas of partitioning & bucketing to optimize queries in Hive
    • Understand what goes on under the hood of Hive w/ HDFS & MapReduce
    • Explore subqueries, table generating functions, windowing, & more

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: lifetime
    • Access options: web streaming, mobile streaming
    • Certification of completion not included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: all levels, but some knowledge of SQL is necessary

    Compatibility

    • Internet required

    THE EXPERT

    Loonycorn comprises four individuals—Janani Ravi, Vitthal Srinivasan, Swetha Kolalapudi and Navdeep Singh—who honed their tech expertise at Google and Flipkart. The team has distilled complicated tech concepts into funny, practical, engaging courses, and is excited to share its content with eager students.

    Learn By Example: Qlikview


    KEY FEATURES

    A Qlikview app is like an in-memory database: a single tool you can use to transform, summarize, and visualize data. The interactive nature of Qlikview allows you to explore and iterate on data very quickly to develop an intuitive feel for it. In this course, you'll work through real-life, practical examples to learn how to use this tool.

    • Access 26 lectures & 2.5 hours of content 24/7
    • Use list boxes, table boxes, & chart boxes to query data
    • Load data into a QV app from CSV & databases, avoiding synthetic keys & circular references
    • Transform & add new fields in a load script
    • Present your insights effectively using elements like charts, drill downs, & triggers
    • Perform nested aggregations in charts

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: lifetime
    • Access options: web streaming, mobile streaming
    • Certification of completion not included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: all levels

    Compatibility

    • Internet required

    THE EXPERT

    Loonycorn comprises four individuals—Janani Ravi, Vitthal Srinivasan, Swetha Kolalapudi and Navdeep Singh—who honed their tech expertise at Google and Flipkart. The team has distilled complicated tech concepts into funny, practical, engaging courses, and is excited to share its content with eager students.

    Learn By Example: Apache Storm


    KEY FEATURES

    Storm is to real-time stream processing what Hadoop is to batch processing. Using Storm, you can build applications that let you be highly responsive to the latest data and react within seconds and minutes - like finding the latest trending topics on Twitter, or monitoring spikes in payment gateway failures. From simple data transformations to applying machine learning algorithms on the fly, Storm can do it all.

    • Access 36 lectures & 4 hours of content 24/7
    • Understand Spouts & Bolts, which are the building blocks of every Storm topology
    • Run a Storm topology in the local mode & in the remote mode
    • Parallelize data processing within a topology using different grouping strategies
    • Manage reliability & fault-tolerance within Spouts & Bolts
    • Perform complex transformations on the fly using the Trident topology
    • Apply ML algorithms on the fly using libraries like Trident-ML & Storm-R

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: lifetime
    • Access options: web streaming, mobile streaming
    • Certification of completion not included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: all levels

    Compatibility

    • Internet required

    THE EXPERT

    Loonycorn comprises four individuals—Janani Ravi, Vitthal Srinivasan, Swetha Kolalapudi and Navdeep Singh—who honed their tech expertise at Google and Flipkart. The team has distilled complicated tech concepts into funny, practical, engaging courses, and is excited to share its content with eager students.

    Learn By Example: Scala


    KEY FEATURES

    The best way to learn is by example, and in this course you'll get the lowdown on Scala with 65 comprehensive, hands-on examples. Scala is a general-purpose programming language that is highly scalable, making it incredibly useful in building programs. Over this immersive course, you'll explore just how Scala can help your programming skill set, and how you can set yourself apart from other programmers by knowing this efficient tool.

    • Access 67 lectures & 6.5 hours of content 24/7
    • Use Scala w/ an intermediate level of proficiency
    • Read & understand Scala programs, including those w/ highly functional forms
    • Identify the similarities & differences between Java & Scala to use each to their advantages

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: lifetime
    • Access options: web streaming, mobile streaming
    • Certification of completion not included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: intermediate

    Compatibility

    • Internet required

    THE EXPERT

    Loonycorn comprises four individuals—Janani Ravi, Vitthal Srinivasan, Swetha Kolalapudi and Navdeep Singh—who honed their tech expertise at Google and Flipkart. The team has distilled complicated tech concepts into funny, practical, engaging courses, and is excited to share its content with eager students.

    Scalable Programming with Scala and Spark


    KEY FEATURES

    The functional programming nature and the availability of a REPL environment make Scala particularly well suited for a distributed computing framework like Spark. Using these two technologies in tandem can allow you to effectively analyze and explore data in an interactive environment with extremely fast feedback. This course will teach you how to best combine Spark and Scala, making it perfect for aspiring data analysts and Big Data engineers.

    • Access 51 lectures & 8.5 hours of content 24/7
    • Use Spark for a variety of analytics & machine learning tasks
    • Understand functional programming constructs in Scala
    • Implement complex algorithms like PageRank & Music Recommendations
    • Work w/ a variety of datasets from airline delays to Twitter, web graphs, & Product Ratings
    • Use the different features & libraries of Spark, like RDDs, Dataframes, Spark SQL, MLlib, Spark Streaming, & GraphX
    • Write code in Scala REPL environments & build Scala applications w/ an IDE

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: lifetime
    • Access options: web streaming, mobile streaming
    • Certification of completion not included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: all levels, but some knowledge of Java or C++ is assumed

    Compatibility

    • Internet required

    THE EXPERT

    Loonycorn comprises four individuals—Janani Ravi, Vitthal Srinivasan, Swetha Kolalapudi and Navdeep Singh—who honed their tech expertise at Google and Flipkart. The team has distilled complicated tech concepts into funny, practical, engaging courses, and is excited to share its content with eager students.

              Machine Learning and Data Science eBook and Course Bundle for $34   
    A Complete Library On Modern Data Tools & Techniques That Can Give You A Major Career Boost
    Expires April 30, 2022 23:59 PST
    Buy now and get 92% off

    Learning Ansible 2: Second Edition eBook


    KEY FEATURES

    Ansible is an open source automation platform that assists organizations with tasks such as configuration management, application deployment, orchestration, and task automation. In this book, you'll learn about the fundamentals and practical aspects of Ansible 2, getting accustomed to new features and learning how to integrate with cloud platforms like Amazon Web Services. By the end, you'll be able to leverage Ansible parameters to expedite tasks for your organization. Or yourself.

    • Access 240 pages of content 24/7
    • Set up Ansible 2 & an Ansible 2 project in a future-proof way
    • Perform basic operations w/ Ansible 2 such as creating, copying, moving, changing, & deleting files
    • Deploy complete cloud environments using Ansible 2 on AWS & DigitalOcean
    • Explore complex operations w/ Ansible 2
    • Develop & test Ansible playbooks
    • Write a custom module & test it

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: lifetime
    • Access options: web streaming, mobile streaming
    • Certification of completion not included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: all levels

    Compatibility

    • Internet required

    THE EXPERT

    Packt’s mission is to help the world put software to work in new ways, through the delivery of effective learning and information services to IT professionals. Working towards that vision, it has published over 3,000 books and videos so far, providing IT professionals with the actionable knowledge they need to get the job done–whether that’s specific learning on an emerging technology or optimizing key skills in more established tools.

    Practical DevOps eBook


    KEY FEATURES

    DevOps is a practical field that focuses on delivering business value as efficiently as possible. DevOps encompasses all the flows from code through testing environments to production environments, stressing cooperation between different roles and how they can work together more closely. Through this book, you'll learn how DevOps affects architecture, starting by creating a sample enterprise Java application that you will continue to work with throughout the following chapters.

    • Access 240 pages of content 24/7
    • Understand how all DevOps systems fit together to form a larger whole
    • Set up & familiarize yourself w/ all the tools you need to be efficient w/ DevOps
    • Design an application that is suitable for continuous deployment systems
    • Store & manage your code effectively using different options such as Git, Gerrit, & Gitlab
    • Configure a job to build a sample CRUD application
    • Test the code using automated regression testing w/ Jenkins Selenium
    • Deploy your code using tools such as Puppet, Ansible, Palletops, Chef, & Vagrant
    • Monitor the health of your code w/ Nagios, Munin, & Graphite

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: lifetime
    • Access options: web streaming, mobile streaming
    • Certification of completion not included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: all levels

    Compatibility

    • Internet required

    THE EXPERT

    Packt’s mission is to help the world put software to work in new ways, through the delivery of effective learning and information services to IT professionals. Working towards that vision, it has published over 3,000 books and videos so far, providing IT professionals with the actionable knowledge they need to get the job done–whether that’s specific learning on an emerging technology or optimizing key skills in more established tools.

    DevOps Automation Cookbook eBook


    KEY FEATURES

    There has been a recent explosion in tools that allow you to redefine the delivery of infrastructure and applications, using a combination of automation and testing to deliver continuous deployment. This book shows you how to use some of the newest and most exciting tools to revolutionize the way you deliver applications and software. By tackling real-world issues, you'll be guided through a huge variety of tools.

    • Access 334 pages of content 24/7
    • Manage, use, & work w/ code in the Git version management system
    • Create hosts automatically using a simple combination of TFTP, DHCP, & pre-seeds
    • Implement virtual hosts using the ubiquitous VMware ESXi hypervisor
    • Control configuration using Ansible
    • Develop powerful, consistent, & portable containers using Docker
    • Track trends, discover data, & monitor key systems using InfluxDB, syslog, & Sensu
    • Deal efficiently w/ powerful cloud infrastructures using AWS Infrastructure & the Heroku Platform as services

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: lifetime
    • Access options: web streaming, mobile streaming
    • Certification of completion not included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: all levels

    Compatibility

    • Internet required

    THE EXPERT

    Packt’s mission is to help the world put software to work in new ways, through the delivery of effective learning and information services to IT professionals. Working towards that vision, it has published over 3,000 books and videos so far, providing IT professionals with the actionable knowledge they need to get the job done–whether that’s specific learning on an emerging technology or optimizing key skills in more established tools.

    Python Machine Learning eBook


    KEY FEATURES

    Machine learning is transforming the way businesses operate, and being able to understand trends and patterns in complex data is becoming critical for success. Python can help you deliver key insights into your data by running unique algorithms and statistical models. Covering a wide range of powerful Python libraries, this book will get you up to speed with machine learning.

    • Access 454 pages of content 24/7
    • Find out how different machine learning techniques can be used to answer different data analysis questions
    • Learn how to build neural networks using Python libraries & tools such as Keras & Theano
    • Write clean & elegant Python code to optimize the strength of machine learning algorithms
    • Discover how to embed your machine learning model in a web application
    • Predict continuous target outcomes using regression analysis
    • Uncover hidden patterns & structures in data w/ clustering
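
    The clustering bullet can be sketched with a minimal 1-D k-means in plain Python. This is illustrative only and is not taken from the book:

```python
# Minimal 1-D k-means: alternate between assigning points to the
# nearest center and moving each center to its cluster's mean.
def kmeans_1d(points, centers, iters=10):
    for _ in range(iters):
        # Assignment step: nearest center wins.
        clusters = {c: [] for c in centers}
        for p in points:
            nearest = min(centers, key=lambda c: abs(c - p))
            clusters[nearest].append(p)
        # Update step: move each center to its cluster mean.
        centers = [sum(ps) / len(ps) if ps else c
                   for c, ps in clusters.items()]
    return sorted(round(c, 6) for c in centers)

data = [1.0, 1.2, 0.8, 9.8, 10.1, 10.4]
print(kmeans_1d(data, [0.0, 5.0]))  # [1.0, 10.1]
```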

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: lifetime
    • Access options: web streaming, mobile streaming
    • Certification of completion not included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: all levels

    Compatibility

    • Internet required

    THE EXPERT

    Packt’s mission is to help the world put software to work in new ways, through the delivery of effective learning and information services to IT professionals. Working towards that vision, it has published over 3,000 books and videos so far, providing IT professionals with the actionable knowledge they need to get the job done–whether that’s specific learning on an emerging technology or optimizing key skills in more established tools.

    Mastering Python for Data Science eBook


    KEY FEATURES

    The Python programming language, beyond having conquered the scientific community in the last decade, is now an indispensable tool for data scientists. Using Python will offer you a fast, reliable, cross-platform, and mature environment for data analysis, machine learning, and algorithmic problem solving. This comprehensive guide will help you move beyond the hype and transcend the theory by providing you with a hands-on, advanced study of data science.

    • Access 294 pages of content 24/7
    • Manage data & perform linear algebra in Python
    • Derive inferences from the analysis by performing inferential statistics
    • Solve data science problems in Python
    • Create high-end visualizations using Python
    • Evaluate & apply the linear regression technique to estimate the relationships among variables
    • Build recommendation engines w/ the various collaborative filtering algorithms
    • Apply ensemble methods to improve your predictions
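
    As a taste of the collaborative-filtering bullet, here is a bare-bones user-based filter using cosine similarity; the ratings table is invented:

```python
import math

# User-based collaborative filtering in miniature: users whose
# rating vectors point in similar directions have similar tastes.
ratings = {
    "ann": {"m1": 5, "m2": 3, "m3": 4},
    "bob": {"m1": 4, "m2": 2, "m3": 5},
    "cat": {"m1": 1, "m2": 5, "m3": 2},
}

def cosine(u, v):
    """Cosine similarity over the items both users rated."""
    common = set(u) & set(v)
    num = sum(u[m] * v[m] for m in common)
    den = math.sqrt(sum(u[m] ** 2 for m in common)) * \
          math.sqrt(sum(v[m] ** 2 for m in common))
    return num / den if den else 0.0

sim_ab = cosine(ratings["ann"], ratings["bob"])
sim_ac = cosine(ratings["ann"], ratings["cat"])
print(sim_ab > sim_ac)  # True: ann's tastes are closer to bob's
```

    A recommender would then weight bob's ratings more heavily when predicting what ann has not yet seen.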

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: lifetime
    • Access options: web streaming, mobile streaming
    • Certification of completion not included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: all levels

    Compatibility

    • Internet required

    THE EXPERT

    Packt’s mission is to help the world put software to work in new ways, through the delivery of effective learning and information services to IT professionals. Working towards that vision, it has published over 3,000 books and videos so far, providing IT professionals with the actionable knowledge they need to get the job done–whether that’s specific learning on an emerging technology or optimizing key skills in more established tools.

    Practical Data Analysis eBook


    KEY FEATURES

    Data analysis is more important in business than ever, and data scientists are getting paid big bucks to help companies make better-informed decisions. This book explains basic data algorithms using hands-on machine learning techniques. You'll process several types of data, such as text, images, social network graphs, and documents.

    • Access 338 pages of content 24/7
    • Acquire, format, & visualize your data
    • Build an image-similarity search engine
    • Generate meaningful visualizations that anyone can understand
    • Get started w/ analyzing social network graphs
    • Find out how to implement sentiment text analysis
    • Install data analysis tools such as Pandas, MongoDB, & Apache Spark
    • Implement machine learning algorithms such as classification or forecasting
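
    Sentiment text analysis, in its simplest lexicon-based form, fits in a few lines. The word lists below are illustrative only, not from the book:

```python
# Toy lexicon-based sentiment scorer: count positive and negative
# words and compare. Real systems handle punctuation, negation, etc.
POSITIVE = {"good", "great", "love", "excellent"}
NEGATIVE = {"bad", "poor", "hate", "terrible"}

def sentiment(text):
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(sentiment("I love this great product"))  # positive
print(sentiment("terrible bad docs"))          # negative
```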

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: lifetime
    • Access options: web streaming, mobile streaming
    • Certification of completion not included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: all levels

    Compatibility

    • Internet required

    THE EXPERT

    Packt’s mission is to help the world put software to work in new ways, through the delivery of effective learning and information services to IT professionals. Working towards that vision, it has published over 3,000 books and videos so far, providing IT professionals with the actionable knowledge they need to get the job done–whether that’s specific learning on an emerging technology or optimizing key skills in more established tools.

    Data Mining with Python


    KEY FEATURES

    Every business wants to gain insights from data to make more informed decisions. Data mining provides a way of finding these insights, and Python is one of the most popular languages with which to perform it. In this course, you will discover the key concepts of data mining and learn how to apply different techniques to gain insight to real-world data. By course's end, you'll have a valuable skill that companies are clamoring to hire for.

    • Access 21 lectures & 2 hours of content 24/7
    • Discover data mining techniques & the Python libraries used for data mining
    • Tackle notorious data mining problems to get a concrete understanding of these techniques
    • Understand the process of cleaning data & the steps involved in filtering out noise
    • Build an intelligent application that makes predictions from data
    • Learn about classification & regression techniques like logistic regression, the k-NN classifier, & more
    • Predict house prices & the number of TV show viewers
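
    The k-NN classifier mentioned above is simple enough to sketch with the standard library; the training points here are invented:

```python
from collections import Counter

# Tiny k-nearest-neighbours classifier: label a new point by a
# majority vote among its k closest training examples.
train = [((1.0, 1.0), "a"), ((1.2, 0.8), "a"),
         ((6.0, 6.2), "b"), ((5.8, 6.1), "b"), ((6.1, 5.9), "b")]

def knn(point, k=3):
    dist = lambda p, q: (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    nearest = sorted(train, key=lambda t: dist(point, t[0]))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

print(knn((1.1, 0.9)))  # a
print(knn((6.0, 6.0)))  # b
```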

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: lifetime
    • Access options: web streaming, mobile streaming
    • Certification of completion not included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: all levels

    Compatibility

    • Internet required

    THE EXPERT

    Saimadhu Polamuri is a data science educator and the founder of Data Aspirant, a data science portal for beginners. He has 3 years of experience in data mining and 5 years of experience in Python. He is also interested in big data technologies such as Hadoop, Pig, and Spark. He has a good command of the R programming language and Matlab, and a rudimentary understanding of the OpenCV computer vision library for C++.

    Python Machine Learning Projects


    KEY FEATURES

    Machine learning gives you extremely powerful insights into data, and it has become so ubiquitous that you encounter it constantly while browsing the internet without even knowing it. Its applications range from recommendation systems to self-driving cars. In this course, you'll be introduced to a unique blend of projects that will teach you what machine learning is all about and how you can use Python to create machine learning projects.

    • Access 26 lectures & 3 hours of content 24/7
    • Work on six independent projects to help you master machine learning in Python
    • Cover concepts such as classification, regression, clustering, & more
    • Apply various machine learning algorithms
    • Master Python's packages & libraries to facilitate computation
    • Implement your own machine learning models
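
    One way to "implement your own machine learning model," as the last bullet puts it, is to fit a line by gradient descent. This sketch uses invented data and is not taken from the course:

```python
# Fitting y = w*x + b by gradient descent on mean squared error.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]  # exactly y = 2x + 1

w, b, lr = 0.0, 0.0, 0.05
for _ in range(2000):
    # Gradients of mean squared error with respect to w and b.
    grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / len(xs)
    grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad_w
    b -= lr * grad_b

print(round(w, 3), round(b, 3))  # converges close to 2.0 and 1.0
```

    The same loop, with a different loss and model, underlies much of modern machine learning.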

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: lifetime
    • Access options: web streaming, mobile streaming
    • Certification of completion not included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: all levels

    Compatibility

    • Internet required

    THE EXPERT

    Alexander T. Combs is an experienced data scientist, strategist, and developer with a background in financial data extraction, natural language processing and generation, and quantitative and statistical modeling. He is currently a full-time lead instructor for a data science immersive program in New York City.

              CUJO Smart Internet Security Firewall + Free Subscription for $224   
    Arm All Your Home's WiFi-Connected Devices with Business Level Security for Life with One Simple Device
    Expires December 31, 2022 23:59 PST
    Buy now and get 9% off





    KEY FEATURES

    "It won't happen to me." They may be the most famous last words known to humanity, and they've never been more pertinent than in today's age of hacking. Your home is full of smart devices that aren't protected by antivirus software, leaving them and your home network open to unwelcome digital intruders. That means intruders can control your home's devices, see your online activities, and even steal your personal information. Don't let that happen. Simply connect CUJO to your network and let it use its machine learning protocols to secure every device operating on that network. This smart firewall keeps your home (and family) safe from hacks, viruses, and other web threats that could affect any web-connected device on your network, all without slowing it down. Best of all, this deal includes a free lifetime subscription to all of CUJO's business-level services.

    Demoed at CES 2017
    9.6/10, Digital Reviews
    "CUJO goes beyond traditional security by using a multi-layer approach that combines firewall, antivirus, and malware protection typically found in separate devices," Yahoo Finance
    "CUJO provides the sophistication of its corporate counterpart, with the elegance and ease-of-use of a home appliance. This new generation of cyber home security gives the physical guard dog a virtual partner," The Huffington Post

    • Secure all your network-connected devices w/ one tool for life
    • Enjoy business-level internet security blocking malicious sites, viruses, & hacks
    • Keep your network safe from phishing, malware, web cam hacks, & other cyber threats
    • Use the mobile app to control & monitor all devices on your network, receive instant threat notifications, & control internet access for select devices
    • Manually override any blocks automated by CUJO so you're in control all the time
    • Institute parental controls like site blocks, access schedules, & time limits w/ new feature coming this month

    PRODUCT SPECS

    Details & Requirements

    • Length of subscription: lifetime
    • Certifications: FCC, ETL, WEEE, CE, Safety Cert
    • Dimensions: 4.875" x 4.875" x 5.75"
    • Ambient temperature: 32°F - 104°F
    • Processor: Dual Core 1GHz
    • Flash memory: 4GB Flash
    • SDRAM memory: 1GB DDR SDRAM
    • Acceleration: Cryptographic Hardware Acceleration
    • Ports: 2 1Gbps ethernet ports
    • Power supply input: 100-240V ~ 0.3A 50-60Hz
    • Power supply output: 5V DC 2.0A Max
    • Plugs: compatible with US/CA, EU, AU/NZ, UK, Other countries (only comes with US plug)

    Compatibility

    • WiFi router
    • Modem and router as separate devices
    • Modem and router as one device
    • Wireless extender or access point in addition to your router
    • App is compatible with iOS 8.4.1 or later and Android 4.1.1 or later
    • For full compatibility, click here.

    Includes

    • CUJO device
    • Power cord
    • Ethernet cable
    • Lifetime subscription

              The Complete Introduction to R Programming Bundle for $49   
    Learn to Apply R Programming Concepts for Effective Statistical Analysis & Big Pay-Days with 5 Courses & 3 E-Books
    Expires February 06, 2022 23:59 PST
    Buy now and get 91% off

    Introduction to R Programming


    KEY FEATURES

    It seems like everything these days is driven by data, and statisticians and analysts across industries need to handle this data efficiently and tactfully. That's where R comes in, a powerful programming language that helps developers solve even the most complex data problems. Data scientists are in constant demand, and this extensive course will give you your first taste of R, enabling you to make statistical inferences and run programs that solve important data problems and turn heads.

    • Access 50 lectures & 3.5 hours of content 24/7
    • Get introduced to the R Studio & programming concepts like variables, vectors, arrays, loops, & matrices
    • Visualize data using R's base graphics
    • Learn the fundamentals of univariate & bivariate analysis, computing confidence intervals, interpreting p values, & working w/ statistical tests
    • Perform a full-scale data analysis project

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: lifetime
    • Access options: web streaming, mobile streaming
    • Certification of completion not included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: all levels

    Compatibility

    • Internet required

    THE EXPERT

    Selva Prabhakaran is a data scientist with a large E-commerce organization. In his 7 years of experience in data science, he has tackled complex real-world data science problems and delivered production-grade solutions for top multinational companies. Selva lives in Bangalore with his wife.

    Learning R for Data Visualization


    KEY FEATURES

    R is one of the top rising tools in the analytics world. At its core, R is a statistical programming language that provides excellent tools for data mining and analysis, but it also has high-level graphics and machine learning capabilities. In this course, you'll learn how to harness those graphics techniques to represent complex sets of data in inspiring ways.

    • Access 31 lectures & 2 hours of content 24/7
    • Create basic plots like histograms, scatterplots & more
    • Import data in R from popular formats like CSV & Excel tables
    • Build a complete website to import & plot data
    • Learn how to use the Shiny package to create fully-featured web pages directly from the R console

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: lifetime
    • Access options: web streaming, mobile streaming
    • Certification of completion not included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: all levels

    Compatibility

    • Internet required

    THE EXPERT

    Dr. Fabio Veronesi obtained a PhD in digital soil mapping from Cranfield University and then moved to Zurich, where he has been working for the past three years as a postdoc at ETH. There, he works on geoinformation topics, ranging from applying mathematical techniques to improve shaded relief representations, to using machine learning to increase the accuracy of wind speed maps.

    During his PhD, he needed to learn a programming language, because commercial applications did not provide the ideal platforms to pursue his research work. Since R has a series of packages created specifically for the application of statistical techniques to soil science, he decided to teach himself this powerful language. Since then, he has been using R every day for his work.

    R Graph Essentials


    KEY FEATURES

    R is an ideal tool for organizing and graphing huge datasets, which is especially valuable to businesses that handle a lot of users and financial details on a daily basis. In this beginner's course to R graphics you'll get a solid grounding in the "base" graphics package in R, as well as more sophisticated packages like lattice and ggplot2. By course's end, you'll be ready to extend your R knowledge to more advanced levels.

    • Access 41 lectures & 2 hours of content 24/7
    • Understand the basic functionality of R graphs
    • Explore different types of graphs for visualizing different types of variables
    • Cover bivariate plots, time series, & high dimensional plots
    • Learn the tips & tricks to the most efficient ways of drawing various types of graphs

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: lifetime
    • Access options: web streaming, mobile streaming
    • Certification of completion not included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: all levels

    Compatibility

    • Internet required

    THE EXPERT

    Ehsan Karim is a statistics Ph.D. candidate at the University of British Columbia. His current research interest is in the methods that deal with time-dependent confounding in longitudinal observational studies. Additionally, he is interested in software implementation of methods related to causal inference. He has been a user of R for more than 15 years, and has more than 5 years of experience in teaching various statistical software packages.

    Building Interactive Graphs with ggplot2 and Shiny


    KEY FEATURES

    ggplot2 is one of R's most popular packages, and is an implementation of the grammar of graphics in R. In this course, you'll move beyond the basic, default graphics offered by R and learn how to create more advanced and publication-ready plots. Soon enough, you'll be setting yourself apart from other data job seekers with more sophisticated and interactive graphing abilities.

    • Access 40 lectures & 2 hours of content 24/7
    • Start making elegant & publication-ready plots by learning ggplot2
    • Build statistical plots layer by layer
    • Understand how to combine elements to make new graphics
    • Customize your graphs & make interactive web pages to present your work or analyze your data

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: lifetime
    • Access options: web streaming, mobile streaming
    • Certification of completion not included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: all levels

    Compatibility

    • Internet required

    THE EXPERT

    Christophe Ladroue has many years of experience in machine learning and statistics. Most of his work has been focused on developing tools for the analysis of biological data, from genetics to physiology, and his scientific publications span from medical journals to pure statistics. He has used and has been teaching R and ggplot2 for a few years and he occasionally posts related articles on his personal blog: http://chrisladroue.com/blog/

    Learning Data Mining with R


    KEY FEATURES

    As the world continues to generate more and more data at a faster pace, the demand for data mining - generating new information by examining large databases - is growing rapidly as well. R is one of the top tools for data mining, and although data mining is a very broad topic, this course will get you up to speed with the mathematical basics. Once you've got that, you'll be able to directly apply your knowledge to working to solve real-world problems with R.

    • Access 30 lectures & 2.5 hours of content 24/7
    • Understand the mathematical basics of data mining & working w/ algorithms
    • Learn how to solve real-world data mining problems
    • Explore the different disciplines of data mining & the algorithms within them

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: lifetime
    • Access options: web streaming, mobile streaming
    • Certification of completion not included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: all levels

    Compatibility

    • Internet required

    THE EXPERT

    Romeo Kienzler is a Chief Data Scientist at the IBM Watson IoT Division. In his role, he is involved in international data mining and data science projects to ensure that clients get the most out of their data. He works as an Associate Professor for data mining at a Swiss university, and his current research focus is on cloud-scale data mining using open source technologies including R, Apache Spark, SystemML, Apache Flink, and DeepLearning4J. He also contributes to various open source projects. Additionally, he is currently writing a chapter on Hyperledger for a book on blockchain technologies.

    R: Data Analysis and Visualization Book


    KEY FEATURES

    This enormous book will take you on a complete journey with the R programming language and its many applications to data analysis. Over five connected modules, you'll dive into statistical reasoning, graphing with R, data mining, the quantitative finance concepts of R, and its machine learning capabilities. Across these lessons, you'll have a fully-fledged, nuanced understanding of the many professional applications of R.

    • Access 1,738 pages 24/7
    • Describe & visualize the behavior of data & relationships between data
    • Handle missing data gracefully using multiple imputation
    • Create diverse types of bar charts using the default R functions
    • Familiarize yourself w/ algorithms written in R for spatial data mining, text mining, & more
    • Harness the power of R to build machine learning algorithms w/ real-world data science applications
    • Learn specialized machine learning techniques for text mining, big data, & more

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: lifetime
    • Access options: web streaming, mobile streaming
    • Certification of completion not included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: all levels

    Compatibility

    • Internet required

    THE EXPERT

    Packt’s mission is to help the world put software to work in new ways, through the delivery of effective learning and information services to IT professionals. Working towards that vision, it has published over 3,000 books and videos so far, providing IT professionals with the actionable knowledge they need to get the job done–whether that’s specific learning on an emerging technology or optimizing key skills in more established tools.

    R: Unleash Machine Learning Techniques Book


    KEY FEATURES

    Machine learning is one of the most important new frontiers in technology, and the R programming language is one of the best ways to optimize machine learning to solve a diverse range of challenges. Starting with a refresher in R, and then delving into real world problems, this course introduces you to an exciting new way to glean information and answer questions with R.

    • Access 1,123 pages 24/7
    • Implement R machine learning algorithms from scratch
    • Solve real-world problems using machine learning algorithms
    • Write reusable code & build complete machine learning systems from the ground up
    • Evaluate & improve the performance of machine learning models
    • Learn specialized machine learning techniques for text mining, social network data, big data, & more

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: lifetime
    • Access options: web streaming, mobile streaming
    • Certification of completion not included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: all levels

    Compatibility

    • Internet required

    THE EXPERT

    Packt’s mission is to help the world put software to work in new ways, through the delivery of effective learning and information services to IT professionals. Working towards that vision, it has published over 3,000 books and videos so far, providing IT professionals with the actionable knowledge they need to get the job done–whether that’s specific learning on an emerging technology or optimizing key skills in more established tools.

    Data Visualization: Representing Information on Modern Web Book


    KEY FEATURES

    One of the most important things any good data science expert or analyst must know how to do is creative intelligent visualizations. Through this book, you'll learn how to effectively design and present large amounts of data to demonstrate key insights. You'll learn how to visualize with HTML5, JavaScript, and D3, three of the top technologies for creating interactive visualizations on the web.

    • Harness the power of D3 by building interactive & real-time data-driven web visualizations
    • Find out how to use JavaScript to create compelling visualizations of social data
    • Apply critical thinking to visualization designs & get intimate w/ your dataset to identify its potential visual characteristics
    • Explore the various features of HTML5 to design creative visualizations
    • Discover what data is available on Stack Overflow, Facebook, Twitter, & Google+

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: lifetime
    • Access options: web streaming, mobile streaming
    • Certification of completion not included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: all levels

    Compatibility

    • Internet required

    THE EXPERT

    Packt’s mission is to help the world put software to work in new ways, through the delivery of effective learning and information services to IT professionals. Working towards that vision, it has published over 3,000 books and videos so far, providing IT professionals with the actionable knowledge they need to get the job done–whether that’s specific learning on an emerging technology or optimizing key skills in more established tools.

              Machine Learning with Python Course and E-Book Bundle for $49   
    4 E-Books & 5 Courses to Help You Perform Machine Learning Analytics & Command High-Paying Jobs
    Expires January 22, 2022 23:59 PST
    Buy now and get 92% off

    Deep Learning with TensorFlow


    KEY FEATURES

    Deep learning draws on statistics, artificial intelligence, and data to build accurate models, and is one of the most important new frontiers in technology. TensorFlow is one of the newest and most comprehensive libraries for implementing deep learning. Over this course you'll explore some of the possibilities of deep learning, and how to use TensorFlow to process data more effectively than ever.

    • Access 22 lectures & 2 hours of content 24/7
    • Discover the efficiency & simplicity of TensorFlow
    • Process & change how you look at data
    • Sift for hidden layers of abstraction using raw data
    • Train your machine to craft new features to make sense of deeper layers of data
    • Explore logistic regression, convolutional neural networks, recurrent neural networks, high level interfaces, & more
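    To make the logistic regression bullet above concrete without requiring TensorFlow, here is a plain-Python sketch of the same model trained by gradient descent (the toy dataset is invented for illustration; the course itself builds this with TensorFlow ops):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Toy 1-D data: inputs below 0 are class 0, above 0 are class 1.
data = [(-2.0, 0), (-1.0, 0), (-0.5, 0), (0.5, 1), (1.0, 1), (2.0, 1)]

w, b, lr = 0.0, 0.0, 0.5
for _ in range(2000):
    # Gradient of the cross-entropy loss, averaged over the dataset.
    grad_w = sum((sigmoid(w * x + b) - y) * x for x, y in data) / len(data)
    grad_b = sum((sigmoid(w * x + b) - y) for x, y in data) / len(data)
    w -= lr * grad_w
    b -= lr * grad_b

predictions = [1 if sigmoid(w * x + b) > 0.5 else 0 for x, _ in data]
print(predictions)  # all six toy points classified correctly
```

    TensorFlow's value is automating exactly these gradient computations, so the hand-derived update above is the part the library takes off your hands.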

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: lifetime
    • Access options: web streaming, mobile streaming
    • Certification of completion not included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: all levels

    Compatibility

    • Internet required

    THE EXPERT

    Dan Van Boxel is a Data Scientist and Machine Learning Engineer with over 10 years of experience. He is most well-known for "Dan Does Data," a YouTube livestream demonstrating the power and pitfalls of neural networks. He has developed and applied novel statistical models of machine learning to topics such as accounting for truck traffic on highways, travel time outlier detection, and other areas. Dan has also published research and presented findings at the Transportation Research Board and other academic journals.

    Beginning Python


    KEY FEATURES

    Python is a general-purpose, multi-paradigm programming language that many professionals consider one of the best beginner languages due to its relative simplicity and applicability to many coding arenas. This course assumes no prior experience and helps you dive into Python fundamentals to come to grips with this popular language and start your coding odyssey off right.

    • Access 43 lectures & 4.5 hours of content 24/7
    • Learn variables, numbers, strings, & more essential components of Python
    • Make decisions on your programs w/ conditional statements
    • See how functions play a major role in providing a high degree of code recycling
    • Create modules in Python
    • Perform image manipulations w/ Python
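    The fundamentals listed above (variables, strings, conditionals, and reusable functions) fit in a few lines; here is a tiny example of our own in that spirit:

```python
# Variables, strings, a conditional, and a reusable function in one go.

def describe_number(n):
    """Return a short description of an integer."""
    if n % 2 == 0:
        return f"{n} is even"
    return f"{n} is odd"

for value in [3, 4]:
    print(describe_number(value))  # "3 is odd", then "4 is even"
```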

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: lifetime
    • Access options: web streaming, mobile streaming
    • Certification of completion not included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: all levels

    Compatibility

    • Internet required

    THE EXPERT

    William Fiset is a Mathematics and Computer Science Honors student at Mount Allison University with an interest in competitive programming. William has been a Python developer for over 4 years, starting his early Python experience with game development. He owns a popular YouTube channel that teaches Python to beginners and the basics of game development.

    Deep Learning with Python


    KEY FEATURES

    You've seen deep learning everywhere, but you may not have realized it. This discipline is one of the leading solutions for image recognition, speech recognition, object recognition, and language translation - basically the tools you see Google roll out every day. Over this course, you'll use Python to expand your deep learning knowledge to cover backpropagation and its ability to train neural networks.

    • Access 19 lectures & 2 hours of content 24/7
    • Train neural networks in deep learning & to understand automatic differentiation
    • Cover convolutional & recurrent neural networks
    • Build up the theory that covers supervised learning
    • Integrate search & image recognition, & object processing
    • Examine the performance of the sentiment analysis model
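    Backpropagation, the training technique at the heart of this course, is just the chain rule applied repeatedly; here is a minimal sketch of our own on the smallest possible "network", a single sigmoid neuron learning the OR function (squared-error loss and the toy setup are illustrative choices, not the course's exact code):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

samples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w1, w2, b, lr = 0.0, 0.0, 0.0, 1.0

for _ in range(3000):
    for (x1, x2), target in samples:
        out = sigmoid(w1 * x1 + w2 * x2 + b)
        # Chain rule for squared error: dE/dw = (out - target) * out * (1 - out) * x
        delta = (out - target) * out * (1 - out)
        w1 -= lr * delta * x1
        w2 -= lr * delta * x2
        b -= lr * delta

results = [round(sigmoid(w1 * x1 + w2 * x2 + b)) for (x1, x2), _ in samples]
print(results)  # the neuron has learned OR: [0, 1, 1, 1]
```

    Deep networks chain many such layers, and frameworks compute all the deltas automatically via automatic differentiation.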

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: lifetime
    • Access options: web streaming, mobile streaming
    • Certification of completion not included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: all levels

    Compatibility

    • Internet required

    THE EXPERT

    Eder Santana is a PhD candidate in Electrical and Computer Engineering. His thesis topic is deep and recurrent neural networks. After working for 3 years with kernel machines (SVMs, information theoretic learning, and so on), Eder moved to the field of deep learning 2.5 years ago, when he started learning Theano, Caffe, and other machine learning frameworks. Now, Eder contributes to Keras, the deep learning library for Python. Besides deep learning, he also likes data visualization and teaching machine learning, either on online forums or as a teaching assistant.

    Data Mining with Python


    KEY FEATURES

    Every business wants to gain insights from data to make more informed decisions. Data mining provides a way of finding these insights, and Python is one of the most popular languages with which to perform it. In this course, you will discover the key concepts of data mining and learn how to apply different techniques to gain insight to real-world data. By course's end, you'll have a valuable skill that companies are clamoring to hire for.

    • Access 21 lectures & 2 hours of content 24/7
    • Discover data mining techniques & the Python libraries used for data mining
    • Tackle notorious data mining problems to get a concrete understanding of these techniques
    • Understand the process of cleaning data & the steps involved in filtering out noise
    • Build an intelligent application that makes predictions from data
    • Learn about classification & regression techniques like logistic regression, k-NN classifier, & more
    • Predict house prices & the number of TV show viewers
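    The k-NN classifier mentioned above is simple enough to sketch in a few lines: to label a new point, find the k closest training points and take a majority vote. The toy data and function name below are our own illustration, not the course's code:

```python
from collections import Counter

def knn_predict(train, point, k=3):
    # Sort training examples by squared Euclidean distance to the query point.
    by_distance = sorted(
        train,
        key=lambda item: sum((a - b) ** 2 for a, b in zip(item[0], point)),
    )
    # Majority vote among the k nearest neighbours.
    votes = Counter(label for _, label in by_distance[:k])
    return votes.most_common(1)[0][0]

# Toy 2-D data: "small" points near the origin, "large" points far away.
train = [((1, 1), "small"), ((1, 2), "small"), ((2, 1), "small"),
         ((8, 8), "large"), ((8, 9), "large"), ((9, 8), "large")]

print(knn_predict(train, (2, 2)))  # "small"
print(knn_predict(train, (7, 8)))  # "large"
```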

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: lifetime
    • Access options: web streaming, mobile streaming
    • Certification of completion not included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: all levels

    Compatibility

    • Internet required

    THE EXPERT

    Saimadhu Polamuri is a data science educator and the founder of Data Aspirant, a Data Science portal for beginners. He has 3 years of experience in data mining and 5 years of experience in Python. He is also interested in big data technologies such as Hadoop, Pig, and Spark. He has a good command of the R programming language and Matlab. He has a rudimentary understanding of Cpp Computer vision library (opencv) and big data technologies.

    Data Visualization: Representing Information on the Modern Web E-Book


    KEY FEATURES

    You see graphs all over the internet, the workplace, and your life - but do you ever stop to consider how all that data has been visualized? There are many tools and programs that data scientists use to visualize massive, disorganized sets of data. This e-book contains content from "Data Visualization: A Successful Design Process" by Andy Kirk, "Social Data Visualization with HTML5 and JavaScript" by Simon Timms, and "Learning d3.js Data Visualization, Second Edition" by Andrew Rininsland and Swizec Teller, all professionally curated to give you an easy-to-follow track to master data visualization in your own work.

    • Harness the power of D3 by building interactive & real-time data-driven web visualizations
    • Find out how to use JavaScript to create compelling visualizations of social data
    • Identify the purpose of your visualization & your project’s parameters to determine overriding design considerations across your project’s execution
    • Apply critical thinking to visualization design & get intimate with your dataset to identify its potential visual characteristics
    • Explore the various features of HTML5 to design creative visualizations
    • Discover what data is available on Stack Overflow, Facebook, Twitter, & Google+
    • Gain a solid understanding of the common D3 development idioms

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: lifetime
    • Access options: web streaming, mobile streaming
    • Certification of completion not included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: all levels

    Compatibility

    • Internet required

    THE EXPERT

    Packt’s mission is to help the world put software to work in new ways, through the delivery of effective learning and information services to IT professionals. Working towards that vision, it has published over 3,000 books and videos so far, providing IT professionals with the actionable knowledge they need to get the job done–whether that’s specific learning on an emerging technology or optimizing key skills in more established tools.

    Python: Master the Art of Design Patterns E-Book


    KEY FEATURES

    Get a complete introduction to the many uses of Python in this curated e-book drawing content from "Python 3 Object-Oriented Programming, Second Edition" by Dusty Phillips, "Learning Python Design Patterns, Second Edition" by Chetan Giridhar, and "Mastering Python Design Patterns" by Sakis Kasampalis. Once you've got your feet wet, you'll focus in on the most common and useful design patterns from a Python perspective. By the book's end, you'll have a nuanced understanding of designing patterns with Python, allowing you to develop better coding practices and create better systems architectures.

    • Discover what design patterns are & how to apply them to writing Python
    • Implement objects in Python by creating classes & defining methods
    • Separate related objects into a taxonomy of classes & describe the properties & behaviors of those objects via the class interface
    • Understand when to use object-oriented features & when not to use them
    • Explore the design principles that form the basis of software design, such as loose coupling, the Hollywood principle, & the Open Close principle, & more
    • Use Structural Design Patterns to find out how objects & classes interact to build larger applications
    • Improve the productivity & code base of your application using Python design patterns
    • Secure an interface using the Proxy pattern

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: lifetime
    • Access options: web streaming, mobile streaming
    • Certification of completion not included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: all levels

    Compatibility

    • Internet required

    THE EXPERT

    Packt’s mission is to help the world put software to work in new ways, through the delivery of effective learning and information services to IT professionals. Working towards that vision, it has published over 3,000 books and videos so far, providing IT professionals with the actionable knowledge they need to get the job done–whether that’s specific learning on an emerging technology or optimizing key skills in more established tools.

    Python: Deeper Insights into Machine Learning E-Book


    KEY FEATURES

    Machine learning and predictive analytics are becoming one of the key strategies for unlocking growth in a challenging contemporary marketplace. Consequently, professionals who can run machine learning systems are in high demand and are commanding high salaries. This e-book will help you get a grip on advanced Python techniques to design machine learning systems.

    • Learn to write clean & elegant Python code that will optimize the strength of your algorithms
    • Uncover hidden patterns & structures in data w/ clustering
    • Improve accuracy & consistency of results using powerful feature engineering techniques
    • Gain practical & theoretical understanding of cutting-edge deep learning algorithms
    • Solve unique tasks by building models
    • Come to grips w/ the machine learning design process

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: lifetime
    • Access options: web streaming, mobile streaming
    • Certification of completion not included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: all levels

    Compatibility

    • Internet required

    THE EXPERT

    Packt’s mission is to help the world put software to work in new ways, through the delivery of effective learning and information services to IT professionals. Working towards that vision, it has published over 3,000 books and videos so far, providing IT professionals with the actionable knowledge they need to get the job done–whether that’s specific learning on an emerging technology or optimizing key skills in more established tools.

    Python: Real-World Data Science E-Book


    KEY FEATURES

    Data science is one of the most in-demand fields today, and this e-book will guide you to becoming an efficient data science practitioner in Python. Once you've nailed down Python fundamentals, you'll learn how to perform data analysis with Python in an example-driven way. From there, you'll learn how to scale your knowledge to processing machine learning algorithms.

    • Implement objects in Python by creating classes & defining methods
    • Get acquainted w/ NumPy to use it w/ arrays & array-oriented computing in data analysis
    • Create effective visualizations for presenting your data using Matplotlib
    • Process & analyze data using the time series capabilities of pandas
    • Interact w/ different kind of database systems, such as file, disk format, Mongo, & Redis
    • Apply data mining concepts to real-world problems
    • Compute on big data, including real-time data from the Internet
    • Explore how to use different machine learning models to ask different questions of your data

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: lifetime
    • Access options: web streaming, mobile streaming
    • Certification of completion not included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: all levels

    Compatibility

    • Internet required

    THE EXPERT

    Packt’s mission is to help the world put software to work in new ways, through the delivery of effective learning and information services to IT professionals. Working towards that vision, it has published over 3,000 books and videos so far, providing IT professionals with the actionable knowledge they need to get the job done–whether that’s specific learning on an emerging technology or optimizing key skills in more established tools.

    Mastering Python


    KEY FEATURES

    Python is one of the most popular programming languages today, enabling developers to write efficient, reusable code. Here, you'll add Python to your repertoire, learning to set up your development environment, master use of its syntax, and much more. You'll soon understand why engineers at startups like Dropbox rely on Python: it makes the process of creating and iterating upon apps a piece of cake.

    • Master Python w/ 3 hours of content
    • Build Python packages to efficiently create reusable code
    • Create tools & utility programs, and write code to automate software
    • Distribute computation tasks across multiple processors
    • Handle high I/O loads w/ asynchronous I/O for smoother performance
    • Utilize Python's metaprogramming & programmable syntax features
    • Implement unit testing to write better code, faster
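As a flavor of the asynchronous I/O topic above, here's a minimal sketch of our own using Python's standard asyncio module (the task names and delays are invented for illustration):

```python
import asyncio

async def fetch(name, delay):
    # Stand-in for an I/O-bound task, e.g. a network request.
    await asyncio.sleep(delay)
    return name

async def gather_all():
    # asyncio.gather runs the coroutines concurrently, so total wall
    # time is close to the slowest task rather than the sum of all three.
    return await asyncio.gather(
        fetch("a", 0.02), fetch("b", 0.01), fetch("c", 0.03)
    )

results = asyncio.run(gather_all())
print(results)  # results come back in call order: ['a', 'b', 'c']
```

This is the pattern that lets a single thread handle high I/O loads smoothly.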

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: lifetime
    • Access options: web streaming, mobile streaming
    • Certification of completion not included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: all levels

    Compatibility

    • Internet required

    THE EXPERT

    Packt’s mission is to help the world put software to work in new ways, through the delivery of effective learning and information services to IT professionals. Working towards that vision, it has published over 3,000 books and videos so far, providing IT professionals with the actionable knowledge they need to get the job done–whether that’s specific learning on an emerging technology or optimizing key skills in more established tools.

              Machine Learning & AI for Business Bundle for $39   
    Discover Artificial Intelligence, Machine Learning & the R Programming Language in This 4-Course Bundle
    Expires January 08, 2022 23:59 PST
    Buy now and get 96% off

    Artificial Intelligence & Machine Learning Training


    KEY FEATURES

    Artificial intelligence is the simulation of human intelligence by machines using computer systems. No, it's not just a thing of the movies: artificial intelligence systems are used today in medicine, robotics, remote sensors, and even ATMs. This booming field of technology is one of the most exciting frontiers in science, and this course will give you a solid introduction.

    • Access 91 lectures & 17 hours of content 24/7
    • Identify potential areas of applications of AI
    • Learn basic ideas & techniques in the design of intelligent computer systems
    • Discover statistical & decision-theoretic modeling paradigms
    • Understand how to build agents that exhibit reasoning & learning
    • Apply regression, classification, clustering, retrieval, recommender systems, & deep learning
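To give a concrete feel for the "regression" item above, here's a tiny ordinary-least-squares fit written from scratch (our own illustrative sketch, not course material; the data points are invented):

```python
def fit_line(xs, ys):
    # Ordinary least squares for y = a*x + b on paired samples.
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx          # slope
    b = my - a * mx        # intercept
    return a, b

# Points that lie exactly on y = 2x + 1, so the fit recovers the line.
slope, intercept = fit_line([0, 1, 2, 3], [1, 3, 5, 7])
print(slope, intercept)  # 2.0 1.0
```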

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: lifetime access
    • Access options: web streaming, mobile streaming
    • Certification of completion included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: all levels

    Compatibility

    • Internet required

    THE EXPERT

    An initiative by IIT and IIM graduates, eduCBA is a leading global provider of skill-based education addressing the needs of 500,000+ members across 40+ countries. Our unique step-by-step online learning model, along with 1,700+ courses prepared by top-notch professionals from the industry, helps participants achieve their goals successfully. All our training programs are job-oriented, skill-based programs demanded by the industry. At eduCBA, we take pride in making job-oriented, hands-on courses available to anyone, anytime, anywhere. Therefore we ensure that you can enroll 24 hours a day, seven days a week, 365 days a year. Learn at a time, place, and pace of your choice, and plan your study to suit your convenience and schedule. For more details on this course and instructor, click here.

    Introduction to Machine Learning


    KEY FEATURES

    Machine learning is the science of getting computers to act without being explicitly programmed by harvesting data and using algorithms to determine outputs. You see this science in action all the time in spam filtering, search engines, and online ad space, and its uses are only expanding into more powerful applications like self-driving cars and speech recognition. In this crash course, you'll get an introduction to the mechanisms of algorithms and how they are used to drive machine learning.

    • Access 10 lectures & 2 hours of content 24/7
    • Learn machine learning concepts like K-nearest neighbor learning, non-symbolic machine learning, & more
    • Explore the science behind neural networks
    • Discover data mining & statistical pattern recognition
    • Gain practice implementing the most effective machine learning techniques
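The K-nearest neighbor technique named above can be sketched in a dozen lines of plain Python (our own illustrative example, not course code; the points and labels are made up):

```python
from collections import Counter

def knn_predict(points, labels, query, k=3):
    # Classify `query` by majority vote among its k nearest
    # training points (squared Euclidean distance).
    dists = sorted(
        (sum((p - q) ** 2 for p, q in zip(pt, query)), lab)
        for pt, lab in zip(points, labels)
    )
    votes = Counter(lab for _, lab in dists[:k])
    return votes.most_common(1)[0][0]

train = [(1, 1), (1, 2), (2, 1), (8, 8), (8, 9), (9, 8)]
classes = ["low", "low", "low", "high", "high", "high"]
print(knn_predict(train, classes, (2, 2)))  # "low" - nearest three agree
```

Despite its simplicity, KNN is a genuine machine learning algorithm: the "model" is just the stored training data.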

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: lifetime access
    • Access options: web streaming, mobile streaming
    • Certification of completion included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: all levels

    Compatibility

    • Internet required

    THE EXPERT

    An initiative by IIT and IIM graduates, eduCBA is a leading global provider of skill-based education addressing the needs of 500,000+ members across 40+ countries. Our unique step-by-step online learning model, along with 1,700+ courses prepared by top-notch professionals from the industry, helps participants achieve their goals successfully. All our training programs are job-oriented, skill-based programs demanded by the industry. At eduCBA, we take pride in making job-oriented, hands-on courses available to anyone, anytime, anywhere. Therefore we ensure that you can enroll 24 hours a day, seven days a week, 365 days a year. Learn at a time, place, and pace of your choice, and plan your study to suit your convenience and schedule. For more details on this course and instructor, click here.

    Data Science and Machine Learning with R (Part #1): Understanding R


    KEY FEATURES

    The R programming language has become the most widely used language for computational statistics, visualization, and data science - all essential tools in artificial intelligence and machine learning. Companies like Google, Facebook, and LinkedIn use R to perform business data analytics and develop algorithms that help operations move fluidly. In this introductory course, you'll learn the basics of R and get a better idea of how it can be applied.

    • Access 33 lectures & 6 hours of content 24/7
    • Install R studio & learn the basics of R functions
    • Understand data types in R, the recycling rule, special numerical values, & more
    • Explore parallel summary functions, logical conjunctions, & pasting strings together
    • Discover the evolution of business analytics

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: lifetime access
    • Access options: web streaming, mobile streaming
    • Certification of completion included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: all levels

    Compatibility

    • Internet required

    THE EXPERT

    An initiative by IIT and IIM graduates, eduCBA is a leading global provider of skill-based education addressing the needs of 500,000+ members across 40+ countries. Our unique step-by-step online learning model, along with 1,700+ courses prepared by top-notch professionals from the industry, helps participants achieve their goals successfully. All our training programs are job-oriented, skill-based programs demanded by the industry. At eduCBA, we take pride in making job-oriented, hands-on courses available to anyone, anytime, anywhere. Therefore we ensure that you can enroll 24 hours a day, seven days a week, 365 days a year. Learn at a time, place, and pace of your choice, and plan your study to suit your convenience and schedule.

    Data Science and Machine Learning with R (Part #2): Statistics with R


    KEY FEATURES

    Further your understanding of R with this immersive course on one of the most important tools for business analytics. You'll discuss data manipulation and statistics basics before diving into practical, functional use of R. By course's end, you'll have a strong understanding of R that you can leverage on your resume for high-paying analytics jobs.

    • Access 30 lectures & 6 hours of content 24/7
    • Understand variables, quantiles, data creation, & more
    • Calculate variance, covariance, & build scatter plots
    • Explore probability & distribution
    • Use practice problems to reinforce your learning
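Variance and covariance, mentioned above, are short formulas once you see them written out. Here's a plain-Python sketch of our own (the course teaches these in R; the height/weight figures are invented for illustration):

```python
def variance(xs):
    # Sample variance with Bessel's correction (n - 1 denominator).
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def covariance(xs, ys):
    # Sample covariance: positive when the two variables move together.
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)

heights = [1.6, 1.7, 1.8, 1.9]
weights = [55.0, 62.0, 70.0, 78.0]
print(variance(heights), covariance(heights, weights))
```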

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: lifetime access
    • Access options: web streaming, mobile streaming
    • Certification of completion included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: intermediate

    Compatibility

    • Internet required

    THE EXPERT

    An initiative by IIT and IIM graduates, eduCBA is a leading global provider of skill-based education addressing the needs of 500,000+ members across 40+ countries. Our unique step-by-step online learning model, along with 1,700+ courses prepared by top-notch professionals from the industry, helps participants achieve their goals successfully. All our training programs are job-oriented, skill-based programs demanded by the industry. At eduCBA, we take pride in making job-oriented, hands-on courses available to anyone, anytime, anywhere. Therefore we ensure that you can enroll 24 hours a day, seven days a week, 365 days a year. Learn at a time, place, and pace of your choice, and plan your study to suit your convenience and schedule. For more details on this course and instructor, click here.

              Big Data Mastery with Hadoop Bundle for $39   
    Tame Massive Data Sets with 44 Hours of Extensive Hadoop Training
    Expires January 02, 2022 23:59 PST
    Buy now and get 91% off

    Taming Big Data with MapReduce & Hadoop


    KEY FEATURES

    Big data is hot, and data management and analytics skills are your ticket to a fast-growing, lucrative career. This course will quickly teach you two technologies fundamental to big data: MapReduce and Hadoop. Learn and master the art of framing data analysis problems as MapReduce problems with over 10 hands-on examples. Write, analyze, and run real code along with the instructor, both on your own system and in the cloud using Amazon's Elastic MapReduce service. By course's end, you'll have a solid grasp of data management concepts.

    • Learn the concepts of MapReduce to analyze big sets of data w/ 56 lectures & 5.5 hours of content
    • Run MapReduce jobs quickly using Python & MRJob
    • Translate complex analysis problems into multi-stage MapReduce jobs
    • Scale up to larger data sets using Amazon's Elastic MapReduce service
    • Understand how Hadoop distributes MapReduce across computing clusters
    • Complete projects to get hands-on experience: analyze social media data, movie ratings & more
    • Learn about other Hadoop technologies, like Hive, Pig & Spark
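For a taste of what framing a problem as MapReduce means, here's the classic word-count example simulated in plain Python (our own illustrative sketch, not course code; a real job would run the same two phases across a Hadoop cluster):

```python
from itertools import groupby
from operator import itemgetter

def map_phase(line):
    # Mapper: emit a (word, 1) pair for every word in the line.
    for word in line.lower().split():
        yield (word, 1)

def reduce_phase(pairs):
    # The shuffle/sort step groups pairs by key;
    # the reducer then sums each group's values.
    counts = {}
    for key, group in groupby(sorted(pairs), key=itemgetter(0)):
        counts[key] = sum(v for _, v in group)
    return counts

lines = ["big data is big", "data is everywhere"]
pairs = [p for line in lines for p in map_phase(line)]
print(reduce_phase(pairs))
```

Because mappers never share state, the map phase parallelizes trivially, which is the whole point of the model.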

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: lifetime
    • Access options: web streaming, mobile streaming
    • Certification of completion not included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: all levels

    Compatibility

    • Internet required

    THE EXPERT

    Frank Kane spent 9 years at Amazon and IMDb, developing and managing the technology that automatically delivers product and movie recommendations to hundreds of millions of customers. Frank holds 17 issued patents in the fields of distributed computing, data mining, and machine learning. In 2012, Frank left to start his own successful company, Sundog Software, which focuses on virtual reality environment technology and on teaching others about big data analysis. For more details on this course and instructor, click here. This course is hosted by StackSkills, the premier eLearning destination for discovering top-shelf courses on everything from coding—to business—to fitness, and beyond!

    Projects in Hadoop and Big Data: Learn by Building Apps


    KEY FEATURES

    Hadoop is perhaps the most important big data framework in existence, used by major data-driven companies around the globe. Hadoop and its associated technologies allow companies to manage huge amounts of data and make business decisions based on analytics surrounding that data. This course will take you from big data zero to hero, teaching you how to build Hadoop solutions that will solve real world problems - and qualify you for many high-paying jobs.

    • Access 43 lectures & 10 hours of content 24/7
    • Learn how technologies like Mapreduce apply to clustering problems
    • Parse a Twitter stream w/ Python, extract keywords w/ Apache Pig, visualize data w/ NodeJS, & more
    • Set up a Kafka stream w/ Java code for producers & consumers
    • Explore real-world applications by building a relational schema for a health care data dictionary used by the US Department of Veterans Affairs
    • Perform log collection & analytics w/ the Hadoop distributed file system using Apache Flume & Apache HCatalog

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: lifetime
    • Access options: web streaming, mobile streaming
    • Certification of completion not included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: intermediate

    Compatibility

    • Internet required

    THE EXPERT

    Eduonix creates and distributes high-quality technology training content. Their team of industry professionals has been training manpower for more than a decade. They aim to teach technology the way it is used in the industry and professional world. They have a professional team of trainers for technologies ranging from Mobility, Web and Enterprise, and Database and Server Administration.

    Learn Hadoop, MapReduce and Big Data from Scratch


    KEY FEATURES

    Have you ever wondered how major companies, universities, and organizations manage and process all the data they've collected over time? Well, the answer is Big Data, and people who can work with it are in huge demand. In this course you'll cover the MapReduce algorithm and its most popular implementation, Apache Hadoop. Throughout this comprehensive course, you'll learn essential Big Data terminology, MapReduce concepts, advanced Hadoop development, and gain a complete understanding of the Hadoop ecosystem so you can become a big time IT professional.

    • Access 76 lectures & 15.5 hours of content 24/7
    • Learn how to set up single-node Hadoop pseudo-clusters
    • Understand & work w/ the architecture of clusters
    • Run multi-node clusters on Amazon's Elastic Map Reduce (EMR)
    • Master distributed file systems & operations, including running Hadoop on the Hortonworks Sandbox & Cloudera
    • Use MapReduce w/ Hive & Pig
    • Discover data mining & filtering
    • Learn the differences between the Hadoop Distributed File System & the Google File System

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: lifetime
    • Access options: web streaming
    • Certification of completion not included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: beginner

    Compatibility

    • Internet required

    THE EXPERT

    Eduonix creates and distributes high-quality technology training content. Their team of industry professionals has been training manpower for more than a decade. They aim to teach technology the way it is used in the industry and professional world. They have a professional team of trainers for technologies ranging from Mobility, Web and Enterprise, and Database and Server Administration.

    Website - www.eduonix.com

    For more details on this course and instructor, click here. This course is hosted by StackSkills, the premier eLearning destination for discovering top-shelf courses on everything from coding—to business—to fitness, and beyond!

    Introduction to Hadoop


    KEY FEATURES

    Hadoop is one of the most commonly used Big Data frameworks, supporting the processing of large data sets in a distributed computing environment. This tool is becoming more and more essential to big business as the world becomes more data-driven. In this introduction, you'll cover the individual components of Hadoop in detail and get a higher level picture of how they interact with one another. It's an excellent first step towards mastering Big Data processes.

    • Access 30 lectures & 5 hours of content 24/7
    • Install Hadoop in Standalone, Pseudo-Distributed, & Fully Distributed mode
    • Set up a Hadoop cluster using Linux VMs
    • Build a cloud Hadoop cluster on AWS w/ Cloudera Manager
    • Understand HDFS, MapReduce, & YARN & their interactions

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: lifetime
    • Access options: web streaming
    • Certification of completion not included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: beginner
    • IDE like IntelliJ or Eclipse required (free to download)

    Compatibility

    • Internet required

    THE EXPERT

    Loonycorn comprises four individuals--Janani Ravi, Vitthal Srinivasan, Swetha Kolalapudi and Navdeep Singh--who have honed their tech expertise at Google and Flipkart. The team believes it has distilled the instruction of complicated tech concepts into funny, practical, engaging courses, and is excited to be sharing its content with eager students.

    Advanced MapReduce in Hadoop


    KEY FEATURES

    Take your Hadoop skills to a whole new level by exploring its features for controlling and customizing MapReduce to a very granular level. Covering advanced topics like building inverted indexes for search engines, generating bigrams, combining multiple jobs, and much more, this course will push your skills towards a professional level.

    • Access 24 lectures & 4.5 hours of content 24/7
    • Cover advanced MapReduce topics like mapper, reducer, sort/merge, partitioning, & more
    • Use MapReduce to build an inverted index for search engines & generate bigrams from text
    • Chain multiple MapReduce jobs together
    • Write your own customized partitioner
    • Sort a large amount of data by sampling input files
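The inverted index mentioned above is the core data structure behind search engines, and its map/reduce shape is simple: mappers emit (word, document-id) pairs, and reducers collect the documents for each word. A toy sketch of our own in plain Python (the two documents are invented):

```python
def build_inverted_index(docs):
    # Map step: emit (word, doc_id) for each distinct word in a document.
    pairs = []
    for doc_id, text in docs.items():
        for word in set(text.lower().split()):
            pairs.append((word, doc_id))
    # Reduce step: after sorting by word, collect doc ids per word.
    index = {}
    for word, doc_id in sorted(pairs):
        index.setdefault(word, []).append(doc_id)
    return index

docs = {1: "hadoop stores big data", 2: "mapreduce processes big data"}
index = build_inverted_index(docs)
print(index["big"])  # [1, 2] - both documents contain "big"
```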

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: lifetime
    • Access options: web streaming, mobile streaming
    • Certification of completion not included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: all levels
    • IDE like IntelliJ or Eclipse required (free to download)

    Compatibility

    • Internet required

    THE EXPERT

    Loonycorn comprises four individuals--Janani Ravi, Vitthal Srinivasan, Swetha Kolalapudi and Navdeep Singh--who have honed their tech expertise at Google and Flipkart. The team believes it has distilled the instruction of complicated tech concepts into funny, practical, engaging courses, and is excited to be sharing its content with eager students.

    Database Operations via Hadoop and MapReduce


    KEY FEATURES

    Analyzing data is an essential to making informed business decisions, and most data analysts use SQL queries to get the answers they're looking for. In this course, you'll learn how to map constructs in SQL to corresponding design patterns for MapReduce jobs, allowing you to understand how these two programs can be leveraged together to simplify data problems.

    • Access 49 lectures & 1.5 hours of content 24/7
    • Master the art of "thinking parallel" to break tasks into MapReduce transformations
    • Use Hadoop & MapReduce to implement SQL query-like operations
    • Work through SQL constructs such as select, where, group by, & more w/ their corresponding MapReduce jobs in Hadoop
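As a small taste of mapping SQL constructs to MapReduce patterns, here's GROUP BY with SUM sketched as map, shuffle/sort, and reduce phases in plain Python (our own illustration, not course code; the employee rows are made up):

```python
from itertools import groupby
from operator import itemgetter

# Rows of (department, salary) - a hypothetical employees table.
rows = [("eng", 100), ("sales", 70), ("eng", 120), ("sales", 80)]

# SELECT dept, SUM(salary) FROM employees GROUP BY dept:
# the mapper emits (dept, salary) pairs, the shuffle sorts them
# by dept, and the reducer sums each department's group.
mapped = sorted(rows, key=itemgetter(0))
totals = {
    dept: sum(salary for _, salary in group)
    for dept, group in groupby(mapped, key=itemgetter(0))
}
print(totals)  # {'eng': 220, 'sales': 150}
```

WHERE clauses map to a filter inside the mapper, and SELECT columns map to what the mapper chooses to emit.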

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: lifetime
    • Access options: web streaming, mobile streaming
    • Certification of completion not included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: all levels
    • IDE like IntelliJ or Eclipse required (free to download)

    Compatibility

    • Internet required

    THE EXPERT

    Loonycorn comprises four individuals--Janani Ravi, Vitthal Srinivasan, Swetha Kolalapudi and Navdeep Singh--who have honed their tech expertise at Google and Flipkart. The team believes it has distilled the instruction of complicated tech concepts into funny, practical, engaging courses, and is excited to be sharing its content with eager students.

    Recommendation Systems Via Hadoop And MapReduce


    KEY FEATURES

    You see recommendation algorithms all the time, whether you realize it or not. Whether it's Amazon recommending a product, Facebook recommending a friend, or Netflix recommending a new TV show, recommendation systems are a big part of internet life. They rely on collaborative filtering, something you can perform through MapReduce with data collected in Hadoop. In this course, you'll learn how to do it.

    • Access 4 lectures & 1 hour of content 24/7
    • Master the art of "thinking parallel" to break tasks into MapReduce transformations
    • Use Hadoop & MapReduce to implement a recommendations algorithm
    • Recommend friends on a social networking site using a MapReduce collaborative filtering algorithm
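The friend-recommendation idea parallelizes naturally: each user's friend list can be "mapped" independently, and a "reduce" step tallies mutual-friend counts per candidate. Here's a toy single-machine sketch of our own (not course code; the social graph is invented):

```python
from collections import Counter

def recommend_friends(graph, user):
    # Map step: each friend contributes their own friend list;
    # reduce step: count how often each candidate appears.
    candidates = Counter()
    for friend in graph[user]:
        for fof in graph[friend]:
            if fof != user and fof not in graph[user]:
                candidates[fof] += 1
    # Rank candidates by number of mutual friends, most first.
    return [name for name, _ in candidates.most_common()]

graph = {
    "ann": {"bob", "cat"},
    "bob": {"ann", "dan"},
    "cat": {"ann", "dan", "eve"},
    "dan": {"bob", "cat"},
    "eve": {"cat"},
}
print(recommend_friends(graph, "ann"))  # 'dan' first (2 mutual friends)
```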

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: lifetime
    • Access options: web streaming, mobile streaming
    • Certification of completion not included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: all levels
    • IDE like IntelliJ or Eclipse required (free to download)

    Compatibility

    • Internet required

    THE EXPERT

    Loonycorn comprises four individuals--Janani Ravi, Vitthal Srinivasan, Swetha Kolalapudi and Navdeep Singh--who have honed their tech expertise at Google and Flipkart. The team believes it has distilled the instruction of complicated tech concepts into funny, practical, engaging courses, and is excited to be sharing its content with eager students.

    K-Means Clustering via Hadoop And MapReduce


    KEY FEATURES

    Data, especially in enterprise, will often expand at a rapid scale. Hadoop excels at compiling and organizing this data; however, to do anything meaningful with it, you may need to run machine learning algorithms to decipher patterns. In this course, you'll learn one such algorithm, the K-Means clustering algorithm, and how to use MapReduce to implement it in Hadoop.

    • Access 7 lectures & 1.5 hours of content 24/7
    • Master the art of "thinking parallel" to break tasks into MapReduce transformations
    • Use Hadoop & MapReduce to implement the K-Means clustering algorithm
    • Convert algorithms into MapReduce patterns
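K-Means maps onto MapReduce cleanly: the assignment of points to their nearest centroid is the map step, and recomputing each centroid as the mean of its points is the reduce step. A toy one-dimensional sketch of our own (not course code; the data is invented):

```python
def assign(points, centroids):
    # Map step: assign each point to its nearest centroid.
    clusters = {i: [] for i in range(len(centroids))}
    for p in points:
        nearest = min(range(len(centroids)),
                      key=lambda i: (p - centroids[i]) ** 2)
        clusters[nearest].append(p)
    return clusters

def update(clusters):
    # Reduce step: each centroid moves to the mean of its points.
    return [sum(pts) / len(pts) for pts in clusters.values()]

points = [1.0, 2.0, 9.0, 10.0]
centroids = [2.0, 9.0]
for _ in range(5):  # iterate until (here: well past) convergence
    centroids = update(assign(points, centroids))
print(centroids)  # [1.5, 9.5]
```

In a real Hadoop job, each iteration is a separate MapReduce pass over the full dataset.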

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: lifetime
    • Access options: web streaming, mobile streaming
    • Certification of completion not included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: all levels
    • IDE like IntelliJ or Eclipse required (free to download)

    Compatibility

    • Internet required

    THE EXPERT

    Loonycorn comprises four individuals--Janani Ravi, Vitthal Srinivasan, Swetha Kolalapudi and Navdeep Singh--who have honed their tech expertise at Google and Flipkart. The team believes it has distilled the instruction of complicated tech concepts into funny, practical, engaging courses, and is excited to be sharing its content with eager students.

              The Advanced Guide to Deep Learning and Artificial Intelligence Bundle for $42   
    This High-Intensity 14.5 Hour Bundle Will Help You Help Computers Address Some of Humanity's Biggest Problems
    Expires November 28, 2021 23:59 PST
    Buy now and get 91% off

    Deep Learning: Convolutional Neural Networks in Python


    KEY FEATURES

    In this course, intended to expand upon your knowledge of neural networks and deep learning, you'll harness these concepts for computer vision using convolutional neural networks. Going in-depth on the concept of convolution, you'll discover its wide range of applications, from generating image effects to modeling artificial organs.

    • Access 25 lectures & 3 hours of content 24/7
    • Explore the StreetView House Number (SVHN) dataset using convolutional neural networks (CNNs)
    • Build convolutional filters that can be applied to audio or imaging
    • Extend deep neural networks w/ just a few functions
    • Test CNNs written in both Theano & TensorFlow
    Note: we strongly recommend taking The Deep Learning & Artificial Intelligence Introductory Bundle before this course.
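The convolution operation at the heart of CNNs is just a sliding dot product. Here's a one-dimensional sketch of our own in plain Python (illustrative only; the course itself works in Theano and TensorFlow):

```python
def convolve1d(signal, kernel):
    # "Valid" convolution: flip the kernel, slide it across the
    # signal, and take a dot product at each position.
    k = kernel[::-1]
    n = len(signal) - len(k) + 1
    return [
        sum(signal[i + j] * k[j] for j in range(len(k)))
        for i in range(n)
    ]

# A difference kernel acts as a crude edge detector:
# the output spikes wherever neighboring values change.
signal = [0, 0, 1, 1, 1, 0]
print(convolve1d(signal, [1, -1]))  # [0, 1, 0, 0, -1]
```

A CNN stacks many such filters (in 2D, over image patches) and learns their weights from data rather than hand-picking them.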

    PRODUCT SPECS

    Details & Requirements

    • Length of time users can access this course: lifetime
    • Access options: web streaming, mobile streaming
    • Certification of completion not included
    • Redemption deadline: redeem your code within 30 days of purchase
    • Experience level required: all levels, but you must have some knowledge of calculus, linear algebra, probability, Python, Numpy, and be able to write a feedforward neural network in Theano and TensorFlow.
    • All code for this course is available for download here, in the directory cnn_class

    Compatibility

    • Internet required

    THE EXPERT

    The Lazy Programmer is a data scientist, big data engineer, and full stack software engineer. For his master's thesis he worked on brain-computer interfaces using machine learning. These assist non-verbal and non-mobile persons to communicate with their family and caregivers.

    He has worked in online advertising and digital media as both a data scientist and big data engineer, and has built various high-throughput web services around said data. He has created new big data pipelines using Hadoop, Pig, and MapReduce, built machine learning models to predict click-through rate and power news feed recommender systems using linear regression, Bayesian bandits, and collaborative filtering, and validated the results using A/B testing.

    He has taught undergraduate and graduate students in data science, statistics, machine learning, algorithms, calculus, computer graphics, and physics for students attending universities such as Columbia University, NYU, Humber College, and The New School.

    Multiple businesses have benefitted from his web programming expertise. He does all the backend (server), frontend (HTML/JS/CSS), and operations/deployment work. Some of the technologies he has used are: Python, Ruby/Rails, PHP, Bootstrap, jQuery (Javascript), Backbone, and Angular. For storage/databases he has used MySQL, Postgres, Redis, MongoDB, and more.

    Unsupervised Deep Learning in Python