LINUX BARGAIN: ENERGY-EFFICIENT INTEL IVY BRIDGE DUAL CORE PC 2GB/32GB SSD/VGA/HDMI NEW €169!

Runs sooo smoothly with LINUX... see the latest amateur video (no imitation, the real demo): https://youtu.be/yCTr7Rhi7mA WITH a 32 GB SSD (up to 10 times faster than a conventional hard disk) and 2GB of memory... a really fine 2322 PassMark score (almost 3 times faster than today's entry-level budget computers, and that's without even counting the extra speed of the SSD)... A modern new PC with an Ivy Bridge processor for only €169. Comes with Linux Mint, a free open source operating system!!

This computer also features a nice OEM mainboard with VGA and HDMI connectors, LAN, SATA3, and sound, all on board. Everything is assembled and tested in a handsome compact piano-black mini tower with a 400 watt power supply. A fine home computer at a rock-bottom price! Get yours now!

Optionally configure a DVD burner or a hard disk for storage and make it entirely your own!! You can even turn it into a light gaming PC by upgrading it with a PCI Express video card. NB: USB keyboard/mouse only.

Ivy Bridge G1610T Benchmark

This chart comparing CPU benchmarks is made using thousands of PerformanceTest benchmark results and is updated daily. The CPU you selected has been found and highlighted from amongst the high, medium and low end CPU charts. If you would like to select a different CPU type please return to the CPU List.

 http://www.computerstunt.nl/afbeeldingen/g1610tpassmark.png


          Alexa and Watson   
An unexpected combination. How practical is this?  AI helping to build AI.

Building Alexa Skills With IBM Watson and OpenWhisk
Check out this open source project that uses IBM Watson Conversation and OpenWhisk to efficiently create conversation flows for Alexa skills.   by Niklas Heidloff  
          Ipstenu (Mika Epstein) on "restore plugin page as before"   

1) This is not a suggestion for WordPress CORE, which is what the ideas forum is intended for :)

2) I'm sorry, but we'll only be moving forward for a lot of reasons (including the fact that the new codebase is open source, works better, and is stable). If you'd like to HELP us improve the UX (which we all agree could use work), please consider volunteering. There are meetings Wednesdays in #meta

https://make.wordpress.org/meta/2017/03/30/announcing-the-new-wordpress-plugin-directory/


          Red Hat Pairs Enterprise-Grade Kubernetes With Massively-Scalable Cloud Infrastructure In Latest Version Of Red Hat Cloud Suite   
Red Hat, Inc., the world's leading provider of open source solutions, today announced the availability of the latest version of Red Hat Cloud Suite. As cloud-native, containerized applications grow in importance to CIOs focused on enterprise digital transformation, IT infrastructure and management technologies need to adapt to the unique needs posed by modern applications, while still maintaining existing systems.
          Fake Artist Portfolio Generator Questions The Open Source Web [Video]   
“Pro-Folio” is a portfolio website built from fictional identities of artists, created by an algorithm using the open source web.
          Irish Dictionary.org — Open Source Language resource   

Eoin from www.irishdictionary.org works hard to bring Irish language resources to as many people as possible, for free.

His English Irish dictionary online is a great example of this.

It's a wikidictionary - created by submissions from its users. The dictionary has over 1,500 entries, and it's growing every day. You can also download a pdf containing all the dictionary entries.

Eoin's aim is to have a simple, accessible dictionary with fast search. You can search in Irish and in English. Eoin himself is the first to admit his dictionary isn't perfect - but it's better to have something than nothing.

Eoin's approach and goals can be summed up in two old Irish proverbs:

Tús maith, leath na hoibre - A good start is half the work

and

Trí na chéile a thógtar na cáisléain
- In our togetherness, castles are built.


          Ditto 3.18.46    
Ditto is an open source extension to the standard Windows clipboard. It saves each item placed on the clipboard, allowing you access to any of those items at a later time. Ditto allows you to save any type of information that can be put on the clipboard: text, images, HTML, and custom formats.
          AkelPad 4.7.2   
AkelPad is a text editor released under an Open Source license.
          OSLiC: Manifesto   
The Open Source License Compendium Manifesto
          CeBIT in pictures   
Here's a small gallery with some of the photos I took at CeBIT… “Fotos del CeBIT 2010.” From CeBIT 2010, posted by Javier Turégano on 3/06/2010 (33 items). From the moment you walked in, everything smelled of Open Source. The talks by Klaus Knopper, creator of Knoppix, were the most crowded, quite a … […]
          MediaCoder 0.8.28.5582   
MediaCoder is a free tool which unites all the audio and video codecs of the Open Source community; it is especially easy to use.
          CyberFox 52.2.1 Final [Latest]   

Cyberfox is a fast, secure and easy-to-use web browser, powered by the popular Mozilla Firefox open source code. It is designed by 8pecxstudios, taking over where Mozilla left off, working to make a fast, stable and reliable web browser accessible to all. It comes with many customizable options allowing you to personalize …



          If Mobile is not the Future, then What is?   

In 2008, I prophesied 13 things about Internet Marketing that would come to pass in the following years. Eight years on, I see that many of them have come true, e.g.:

  • Google’s Search Preview (though it did not last long)
  • Instant search (or search-as-you-type)
  • Priority of social media over emails
  • The rise of Tablets (I called them Handhelds, back then)
  • The rise of widgets and connected data

Some of them still have not taken full form, but they are prospectively waiting to be fulfilled:

  • The death of mobile operators
  • Web-based marketing analytics (Bigdata is now showing prospects into this prophecy)
  • TV Channel + Internet + Mobile Subscription coming into one single platform or service (you can find it in my comment)

Though I am not a prophet or clairvoyant, I have some predictions for mobile phones, or mobility in general, for 5-10 years ahead of 2016:

Mobile may seem like the future, but as the capacity of mobile devices grows, you might see a different scenario. Mobile is really not the future; the convergence of various devices, flexible screens and responsive displays, cloud computing and cloud OS, and supreme mobility are the actual future.

When WAP was becoming popular, one friend told me, “This is the future”. But WAP did not see much light after a few years; as the capacity of devices increased, WAP seemed unnecessary. Nowadays, many people are saying that “Mobile is the Future”. But I oppose that, since mobile may not remain THE mobile we know today.

If you look back at the evolution of the devices in your home, you might see that the cassette player, VCR, camera, radio, and even TV have now converged into one device. It's really not about making the devices small. The most important factor here is to converge multiple devices into one to offer more utility.

So, mobile is not the future, convergence is!

If we take this example from the past and apply it to the present, you will see that your laptop, mobile, tablet, and TV are separate devices. If convergence is to happen, then these devices too will merge into one device or platform. But how can you get the power and flexibility of a laptop in a mobile phone?

The answer lies in the TV on your wall and its evolution. See how TVs are getting thinner day by day. With LCD already on the market, we might see foldable screens in the very near future. Samsung and Oppo are competing to roll out foldable-screen phones. These phones still have a steel backbone, but in the future you might see backbone-less, amoeba-like phones (or let's call them Floppy Handhelds). One might fold down to 6 inches and, when spread out, give you a 60-inch screen.

Moreover, with Smart TVs, more and more people will start browsing the internet on their TVs. So don't assume that the future relies on small screens; rather, you may embrace whatever screen size you like. Therefore, those who have not thought of making their sites responsive to various screen sizes are missing the big picture.

So, mobile is not the future, flexibility (aka responsive) is!

Dropbox has made our file access more flexible. There are also Google Drive, OneDrive, iCloud, and many others, including cloud storage for images. With the help of this cloud storage, you can view your files on any device.

But one thing you cannot yet make remotely accessible like your files is the operating system. You still need to install operating systems on separate devices. In the near future, though, you might not need to install any OS on any device. You could use an open source OS like Firefox OS and get access to your files and OS arrangement on any device, or have the same look and feel as you change from one device to another. All you might need is a browser or window. And you can also save as many files in the cloud as you want.

So, mobile is not the future, cloud computing is! 

Nowadays, we carry many devices like flash drives, pen drives, SD cards, etc. But in the near future, you may not need to carry anything as long as you have internet access. Any device in a public place could be turned into your phone, TV, and/or laptop. As mobility of information and OS gets introduced, you might not need to carry anything with you.

So, mobile is not the future, mobility is!


           An Analysis of COSPA – A Consortium for Open Source in the Public Administration    
Morgan, Lorraine (2005) An Analysis of COSPA – A Consortium for Open Source in the Public Administration. In: First International Conference on Open Source Systems, 11-15 July 2005, Genova.
           Assessing the Role of Open Source Software in the European Secondary Software Sector: A Voice from Industry    
Agerfalk, Par J. and Deverell, Andrea and Fitzgerald, Brian and Morgan, Lorraine (2005) Assessing the Role of Open Source Software in the European Secondary Software Sector: A Voice from Industry. In: 1st International Conference on Open Source Software, 11-15 July 2005, Genoa, Italy.
          CIOsynergy Announces OSSCube as Platinum Sponsor For Dallas Event.   

OSSCube CEO Lavanya Rastogi will present Dallas IT professionals with an exciting view on the future of open source technology for businesses.

(PRWeb January 22, 2015)

Read the full story at http://www.prweb.com/releases/2015/01/prweb12458826.htm


          SandstoneDb, Simple ActiveRecord Style Persistence in Squeak   

On Persistence, Still Not Happy

Persistence is hard and something you need to deal with in every app. I've written about what's available in Squeak, written about simpler image based solutions for really small systems where just dumping out to one file is sufficient; however, nothing I've used so far has satisfied me completely for various reasons, so before I get to the point of this post, let me do a quick review of my current thoughts on the matter.

Relational Databases

Tired of em, I don't care how much they have to offer me in the areas of declarative indexing and queries, transactions, triggers, stored procedures, views, or any of the handful of things they offer that I don't really want from them. The price they make me pay in programming just isn't worth it for small systems. I don't want my business logic in the database. I don't want to use a big mess of tables to model all my data as a handful of global variables, aka tables, that multiple applications share and modify freely. What I do want from them, transactional persistence of my object model, they absolutely suck at, and all attempts to shoehorn an object model into a relational database end up being an exercise in frustration, compromise, and cussing. I think using a database as an integration point between multiple applications is a terrible idea that just leads to a bunch of fragile applications and a data model you can't change for fear of breaking them. Enough said, on to more object oriented approaches!

Active Record

Ruby on Rails has brought the ActiveRecord pattern mainstream, which was, as far as I know, first popularized in Martin Fowler's book Patterns Of Enterprise Application Architecture, which largely dealt with all the various known methods of mapping objects to databases. Initially I wasn't a fan of the pattern and preferred the more complex domain model with a metadata mapping, but having written an object relational mapper at a previous gig, used several open source ones, and tried out several pure object databases, I've come to appreciate the simplicity and explicitness of its simple API.

If you have to work with a relational database, this is a fairly good compromise for doing so. You can't bind a real object model to a relational database cleanly without massive effort, so don't try, just revel in the fact that you're editing rows rather than trying to hide it. It works reasonably well, and it's easy to get other team members to use it because it's simple.

"Simplicity is the ultimate sophistication" -- Leonardo Da Vinci

Other Approaches

A total OO purist, or a young one still enamored with patternitis, wouldn't want objects to save themselves like an ActiveRecord does. You can see this in the design of most object oriented databases available, it's considered a sin to make you inherit from a class to obtain persistence. I used to be one of those guys too, but I've changed my mind in favor of pragmatism. The typical usage pattern is to create a connection to the OODB server which basically presents itself to you as a persistent dictionary of some sort where you put objects into it and then "commit" any unsaved changes. They will save any object and leave it up to you what your object should look like, intruding as little as possible on your domain, so they say.

Behind the scenes there's some voodoo going on where this persistent dictionary tries to figure out what's actually been changed either by having installed some sort of write barrier that marks objects dirty automatically when they get changed, comparing your objects to a cached copy created when they were originally read, or sometimes even explicitly forcing the programmer to manually mark the object dirty. The point of all of this complexity of course, is to minimize writes to the disk to reduce IO and keep things snappy.

Simplicity Matters

What seems to be overlooked in this approach is the amount of accidental complexity that is imposed upon the programmer. If I have to open a connection to get a persistent dictionary to work with, I now have to store this configuration information, manage the creation of this connection, possibly pool it if it's an expensive resource, and decide where to hang this dictionary so I can have access to it from within my application. This is usually some sort of current session object I can always reach such as a WASession subclass in Seaside. Now, this all actually seems pretty normal, but should it be?

I'm not saying this is wrong, but one has to be aware of the trade-offs made for any particular API or style. At some point you have to wonder if we're not suffering from some form of technical Stockholm syndrome where we forget that all this complexity is killing us and we forget just how painful it really is because we've grown accustomed to it.

Sit down and try explaining one of your programs that use some of this stuff to another programmer unfamiliar with your setup. If you really pay attention, you'll notice just how much of the explaining you're doing has nothing to do with the actual problem you're trying to solve. Much of it is just accidental complexity for plumbing and scaffolding that crept in. If you spend more time explaining the persistence framework than your program and the actual problem it's solving, then maybe that's a problem you'll want to revisit sometime. Do I really want to write code somewhat like...

user := User firstName: 'Ramon' lastName: 'Leon'.
self session commit: [ self session users at: user id put: user ].

with all the associated configuration setup and cognitive load of remembering what I called the accessor to get #users and how I'm hashing the user for this or that class, while remembering the semantics of what exactly is committed, or whether I forgot to mark something dirty, or would I rather do something more straightforward and simple like this...

user := User firstName: 'Ramon' lastName: 'Leon'.
user save.

And just assume the object knows how to persist itself and there's no magic going on? If I say save I just know it commits to disk, whether there were any changes or not. No setup, no configuration, no magic, just save the damn object already.

Contrary to popular belief, disk IO is not the bottleneck, my time is the bottleneck. Computers are cheap, ram is cheap, disks are cheap, programmer's time is usually by far the largest expense on any project. Something simple that just works OK but solidly every time is far more useful to me than something complex that works really really well most of the time but still breaks in weird ways occasionally, forcing me to dig into someone else's complex code for change detection or topological insertion sorting and blow a week of programmer time working on god damn plumbing. I want to spend as much time as possible when programming working on my actual problem, not fighting with the persistence framework to get it to behave correctly or map my object correctly.

A Real Solution

Of course, GemStone is offering GLASS, a 4 gig persistent image that just magically solves all your problems. That will be the preferred option for persistence when you really need to scale in the Seaside world, and I for one will be using it when necessary; however, it does require a 64 bit server and introduces the small additional complexity of changing to an entirely different Smalltalk and learning its class library. Definitely an option if you outgrow Squeak. But will you? I'll get into GemStone more in another post when I can get more into it and give it the attention it deserves, but my main point now is that there's still a need for simple GemStone'ish like persistence for Squeak.

Reality Check

Let's be honest, most apps don't need to scale. Most apps in the real world are written to run small businesses, what DHH calls the fortune five million. The simple fact is, in all likelihood scaling is not and probably won't ever be your problem. We might like to think we're writing the next YouTube or Twitter, but odds are we're not. You can make a career just replacing spreadsheets from hell with simple applications that make people's lives easier without ever once hitting the limits of a single Squeak image (such was the inspiration for DabbleDb), so don't waste your time scaling.

You don't have a scaling problem unless you have a scaling problem. Even if you do have an app that needs to scale, it'll probably need 2 or 3 back end supporting applications that don't, and it's a waste of time making them scale if they don't need to. If scaling ever becomes a problem, be happy; it's a nice problem to have, unless you're doing something stupid like giving away all of your services for free and hoping you'll figure out that little money thing later on.

Conventions Rule

Ruby on Rails has shown us that beyond making things easier with ActiveRecord, things often need to be made more structured and less configurable. Configuration is a hidden complexity that Java has shown can kill any chance of real productivity, sometimes with more configuration than actual code. It's amazing how much simpler programs can get if you just have the guts to make a few tough choices, decide how you want to do things, and always do it that way. Ruby on Rails' true contribution to the programming community was its convention-over-configuration philosophy; ActiveRecord itself was in use long before Rails.

Convention over configuration is really just a nice way of the framework writer saying "This is how it's done and if you don't like it, tough." The problem then of course becomes finding a framework with conventions you agree with, but it's a big world, you're probably a programmer if you're reading this, so if you can't find something, write your own. The only problem with other people's frameworks, is that they're other people's frameworks. There's nothing quite like living in a world of your own creation.

What I Wanted

I wanted something like ActiveRecord from Rails but not mapped to a relational database, that I could use with Seaside and Squeak for small applications. I've accepted that if I need to scale, I'll use GemStone, this limits what I need from a persistence solution for Squeak.

For Squeak, I need a simple, fast, configuration free, crash proof, easy to use object database that doesn't require heavy thinking to use, optimize, or explain to others that allows me to build and iterate prototypes and small applications quickly without having to keep a schema in sync or stop to figure out why something isn't working, or why it's too slow to be usable.

I don't want any complex indexing schemes to be necessary, which means I want something like a prevalence system where all the objects are kept in memory all the time so everything is just automatically fast. I basically just want my classes in Squeak to be persistent and crash proof. I don't need a query language, I have the entire Smalltalk collections hierarchy at my disposal, and I sure as hell don't need SQL.

I also don't want a bunch of configuration. If I want to find all the instances of a User in memory I can simply say...

someUsers := User allInstances.

Without having to first go and configure what memory #allInstances will refer to because obviously I want #allInstances in the current image. After all, isn't a persistent image what we're really after to begin with? Don't we just want our persistent objects to be available to us as if they were just always in memory and the image could never crash? Shouldn't our persistent API be nearly as simple?

Since I'm basically after a persistent image, I don't need any configuration; the image is my configuration. It is my unit of deployment and I've already got one per app/customer anyway. I don't currently, nor do I plan on running multiple customers out of a single image so I can simply assume that when I persist an instance, it will be stored automatically in some subdirectory in the directory my image itself is in, overridable of course, but with a suitable default. If I want to host another instance of a particular database, I'll put another image in a different directory and fire it up.

And now I'm finally getting to the point...

SandstoneDb

Since I couldn't find anything that worked exactly the way I wanted, though Prevayler was pretty close, I just wrote my own. It's a simple object database that uses SmartRefStreams to serialize clusters of objects to disk. Ordinary ReferenceStreams can mix up your instance variables when deserializing older versions of a class.

The root of each cluster is an ActiveRecord / OODB hybrid. It makes ActiveRecord a bit more object oriented by treating it as an aggregate root and its class as a repository for its instances. I'm mixing and matching what I like from Domain Driven Design, Prevayler, and ActiveRecord into a single simple framework that suits me.

SandstoneDb API

To use SandstoneDb, just subclass SDActiveRecord and restart your image to ensure the proper directories are created, that's it, there is no further configuration. The database is kept in a subdirectory matching the name of the class in the same directory as the image. This is a Prevayler like system so all data is kept in memory written to disk on save; on system startup, all data is loaded from disk back into memory. This keeps the image itself small.
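
A minimal sketch of what that looks like (BlogPost and its instance variables are hypothetical examples, not part of SandstoneDb):

SDActiveRecord subclass: #BlogPost
    instanceVariableNames: 'title body comments'
    classVariableNames: ''
    poolDictionaries: ''
    category: 'Blog-Model'

After the restart, saved BlogPost instances live in a BlogPost subdirectory next to the image and are reloaded into memory on startup.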

Like Prevayler, there's a startup cost associated with loading all the instances into memory and rebuilding the object graph, however once loaded, accessing your objects is blazing fast and you don't need to worry about indexing or special query syntaxes like you would with an on disk database. This of course limits the size of the database to whatever you're willing to put up with in load time and whatever you can fit in ram.

To give you a rough idea, loading up a 360 meg database containing about 73,000 hotel objects on my 3ghz Xeon Windows workstation takes about 57 minutes. That's an average of about 5k per object. Hefty and definitely pushing the upper limits of acceptable. Of course load time will vary depending upon your specific domain and the size of the objects. This blog is nearly two years old and only has a few hundred objects varying from 2k to 90k, some of my customers have been using their small apps for nearly a year and only accumulated 500 to 600 business objects averaging 0.5k each. Load time for apps this small is insignificant and using a relational database would be akin to using a sledge hammer to hang an index card with a thumb tack.

API

SandstoneDb has a very simple API for querying and iterating on the class side representing the repository for those instances:

queries

  • #atId: (for fetching a record by its #id)
  • #atId:ifAbsent:
  • #do: (for iterating over all records)
  • #find: (for finding the first matching record)
  • #find:ifAbsent:
  • #find:ifPresent:
  • #findAll (for grabbing all records)
  • #findAll: (for finding all matching records)

Being pretty much just variations of #select: and #detect:, little if any explanation is required for how to use these. The #find naming is to make it clear these queries could potentially be more expensive than just the standard #select: and #detect:.

Though it's memory based now, I'm leaving open the option of future implementations that could be disk based allowing larger databases than will fit in memory; the same API should work regardless.

There's an equally simple API for the instance side:

Accessors that come in handy for all persistent objects.

  • #id (a UUID string in base 36)
  • #createdOn
  • #updatedOn
  • #version (useful in critical sections to validate you're working on the version you expect)
  • #indexString (all instance variables' asStrings as a single string for easy searching)

Actions you can perform on a record.

  • #save (thread safe)
  • #save: (same as above but you can pass a block if you have other work you want done while the object is locked)
  • #critical: (grabs or creates a Monitor for thread safety)
  • #abortChanges (rollback to the last saved version)
  • #delete (thread safe)
  • #validate (for subclasses to override and throw exceptions to prevent saves)

You can freely have records holding references to other records, but a record must be saved before it can be referenced. If you attempt to save an object that references another record that answers true to #isNew, you'll get an exception. Saves are not cascaded; only the programmer can know the proper save order his object model requires, and to do safe cascaded saves would require actual transactions. Saves are always explicit: if you didn't save it, it wasn't saved. There is no magic, and you should never be left scratching your head wondering whether your objects were saved or not.
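
For illustration, here is the save order that rule implies; Company and Person are hypothetical SDActiveRecord subclasses with ordinary accessors:

company := Company new.
company name: 'Acme'.
company save. "company no longer answers true to #isNew"
person := Person new.
person name: 'Joe'.
person company: company. "safe: the referenced record is already saved"
person save.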

Events you can override to hook into a records life cycle.

  • #onBeforeFirstSave
  • #onAfterFirstSave
  • #onBeforeSave
  • #onAfterSave
  • #onBeforeDelete
  • #onAfterDelete

Be careful with these: if an exception occurs, you will prevent the life cycle from completing properly, but then again, that might be what you intend.
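
For example, a record might use one of these hooks to keep a derived field current; a sketch, where BlogPost, title, and slug are hypothetical:

BlogPost >> onBeforeSave
    "recompute the slug from the title before every save"
    slug := title asLowercase copyReplaceAll: ' ' with: '-'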

A testing method you might find useful on occasion.

  • #isNew (answers true prior to the first successful save)

Only subclass SDActiveRecord for aggregate roots where you need to be able to query for the object; for all other objects, just use ordinary Smalltalk objects. You DO NOT need to make every one of your domain objects into ActiveRecords; this is not Ruby on Rails. Choosing your model carefully gives you natural transaction boundaries, since the save of a single ActiveRecord and all ordinary objects contained within is atomic and stored in a single file. There are no real transactions, so you cannot atomically save multiple ActiveRecords.

A good example of an aggregate root object would be an #Order class, while its #LineItem class would just be an ordinary Smalltalk object. A #BlogPost is an aggregate root while a #BlogComment is an ordinary Smalltalk object. #Order and #BlogPost would be ActiveRecords. This allows you to query for #Order and #BlogPost but not for #LineItem and #BlogComment, which is as it should be; those items don't make much sense outside the context of their aggregate root, and no other object in the system should be allowed to reference them directly. Only aggregate roots can be referenced by other objects.

This of course means that should you improperly reference, say, an #OrderItem from an object other than its parent #Order (which is the root of the file they're both stored in), then you'll ultimately end up referencing a copy rather than the original, because such a reference won't be able to maintain its identity after an image restart.
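
In code, only the roots subclass SDActiveRecord; a sketch (the instance variables are assumed for illustration):

SDActiveRecord subclass: #Order
    instanceVariableNames: 'lineItems customer'
    classVariableNames: ''
    poolDictionaries: ''
    category: 'Shop-Model'

Object subclass: #LineItem
    instanceVariableNames: 'product quantity price'
    classVariableNames: ''
    poolDictionaries: ''
    category: 'Shop-Model'

Saving an Order writes it and its LineItems to a single file, and only Order gets the class-side query API; there is no LineItem findAll:.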

In the real world, this is more than enough to write most applications. Transactions are a nice-to-have feature, not a must-have feature, and their value has been grossly oversold. Starbucks doesn't use a two-phase commit, and it's good to remind yourself that the world chugs on anyway; mistakes are sometimes made and corrective actions are taken, but you don't need transactions to do useful work. MySQL became the most popular open source database in existence long before it added transactions as a feature.

Here are some examples of using an ActiveRecord...

person := Person find: [ :e | e name = 'Joe' ].
person save.
person delete.
user := User find: [ :e | e email = 'Joe@Schmoe.com' ] ifAbsent: [ User named: 'Joe' email: 'Joe@Schmoe.com' ].
joe := Person atId: anId.
managers := Employee findAll: [ :e | e subordinates notEmpty ].

Concurrency is handled by calling either #save or #save: and it's entirely up to the programmer to put critical sections around the appropriate code. You are working on the same instances of these objects as other threads and you need to be aware of that to deal with concurrency correctly. You can wrap a #save: around any chunk of code to ensure you have a lock on that object like so...

auction save:[ auction addBid: (Bid price: 30 dollars user: self session currentUser) ].

While #critical: lets you decide when to call #save, in case you want other stuff inside the critical section of code to do something more complex than a simple implicit save. When you're working with multiple distributed systems, like a credit card processor, transactions don't really cut it anyway so you might do something like save the record, get the auth, and if successful, update the record again with the new auth...

auction critical: [
    [ auction
        acceptBid: aBid;
        save;
        authorizeBuyerCC;
        save ]
        on: Error do: [ :error | auction reopen; save ] ]

That's about all there is to using it, there are some more things going on under the hood like crash recovery and startup but if you really want to know how that works, read the code. SandstoneDb is available on SqueakSource and is MIT licensed and makes a handy development and prototyping or small application database for Seaside. If you happen to use it and find any bugs or performance issues, please send me a test case and I'll see what I can do to correct it quickly.


          Small Scriptaculous API Change for Seaside 2.8   

Yesterday I was upgrading one of my applications to the latest version of Scriptaculous and Seaside 2.8. At first everything seemed to go OK, but shortly thereafter I noticed that some of the Ajax in the application had stopped working. After a bit of testing I traced the problem to multi-element Ajax updates where I'm using the evaluator. Stuff like this occasionally happens, so it was time for some investigation.

I cracked open an older image and checked the version I'd been using and started reading the commit comments for each version looking for clues. You can do this from SqueakSource but I usually just do it in Monticello directly. After a bit of digging I find in Scriptaculous-lr.232.mcz the information I'm looking for, namely...

NOTE: SUElement>>#render: does not call #update: anymore, directly use #update:, #replace:, #insert:, and #wrap: now. These methods finally accept any renderable object (string, block, ...) and also encode the contents correctly.

Seems Lukas changed the API to make things more intention revealing. A quick trip through the app looking for evaluators and changing #render: to #update:, and everything started working again. Having made the necessary changes and looked at the new code for a few minutes, I liked it and agree with the API change.
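
For anyone making the same migration, the change is mechanical. Given an SUElement in hand (however your component obtains one; the #total accessor below is a hypothetical example), code of the shape

anElement render: [ :r | r text: self total ]

becomes

anElement update: [ :r | r text: self total ]

with #replace:, #insert:, or #wrap: substituted for #update: when that is the intention you actually mean.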

What I want to point out is the importance of good commit comments (thanks Lukas) that allow those who use your frameworks to work out their problems. Commit comments are the best place to share your thoughts about why you changed something or decided to go in a particular direction because they are, or should be, the first thing a developer reads before loading a new version of that code.

I also want to point out the process itself. With open source code, when things go wrong it's often up to you to solve your own problems. Had I not found what I needed in the comments, I'd have started Googling and searching the archives of the Seaside-Dev list to see if anyone else had run into this issue. If that failed, then I'd have posted to Seaside-Dev asking for help.

There's not a lot of documentation on Seaside and Scriptaculous in comparison to some other frameworks, but there's plenty of help to be found with just a little bit of effort on your part to do your homework and a great community ready and willing to help you out when you need it. But always do your homework first, in case your question has been answered many times over.


          Arduino does Hard Science   

We don’t know why [stoppi71] needs to do gamma spectroscopy. We only know that he has made one, including a high-voltage power supply, a photomultiplier tube, and–what else–an Arduino. You also need a scintillation crystal to convert the gamma rays to visible light for the tube to pick up.

He started out using an open source multichannel analyzer (MCA) called Theremino. This connects through a sound card and runs on a PC. However, he wanted to roll his own and did so with some simple circuitry and an Arduino.

The tube detects very faint light in the crystal so they …


          uBlock Origin–An Ad Blocker for the Edge browser   
uBlock Origin is a free open source browser extension that does content filtering and ad blocking. This extension now is available for the Edge browser and is available in the Windows Store. Navigate to: https://www.microsoft.com/en-nz/store/p/ublock-origin/9nblggh444l4 and go from there!

To use it, you just install it and away you go. It just works. I tried it on one particular site that has a lot of ads AND refuses to show any content if you are using an ad blocker. Not only does that site not complain about the ad blocker, but the extension does rather well so far as I can tell:



Nice! Especially if you have someone in the family with epilepsy!


          Facebook Updates the Patent Grant in Its Open Source Software   
Facebook's Open Source projects are generally released under BSD licenses, and since the BSD licenses carry no patent grant, Facebook attaches its own patent grant so that users need not worry about infringing Facebook's patents. That grant was recently updated: "Updating Our Open Source Patent Grant". For an example, see the files in osquery: the old version is in "PATENTS" here, and the new version in "PATENTS" here; the difference is in the "Update patent grant" commit. It still doesn't look particularly friendly, though...
          This Week in Open Source News: Open Source Fridays, New Linux Foundation Project for Multi-Cloud Environments & More   
This week in open source and Linux news, GitHub takes their Friday enthusiasm beyond casual Friday in creating a weekly "Open Source Day", a new Linux Foundation Project was announced, and much more!
          DavidJames commented on Chris Anderson's blog post How do modern open source autopilots compare to aerospace-grade IMUs?   

          Simple Cloud Storage   
Does anyone know of any simple, light-weight cloud sync server systems that are open source? I've had OwnCloud and NextCloud. I used their services for a while until I realized that after you lose...
          LXer: This Week in Open Source News: Open Source Fridays, New Linux Foundation Project for Multi-Cloud Environments & More   
Published at LXer: This week in open source and Linux news, GitHub takes their Friday enthusiasm beyond casual Friday in creating a weekly "Open Source Day", a new Linux Foundation Project was...
          Restyaboard – An open source alternative to Trello   
Trello is one of the most widely used task-management and team-collaboration tools in the world, with wonderful usability and plenty of flexibility. However, it is not an open source tool, offers no LDAP or AD integration for institutional login, and cannot be installed on your own server; your organization has to use it in the cloud, in what is known as Software as a Service. In some controlled environments, with sensitive information or institutional-login requirements, these restrictions rule Trello out. With a friendly interface and the ability to import Trello boards, lists, and cards, Restyaboard is an alternative that addresses those limitations.

Submitted by Jonathan Maia (jonathanmaiaΘgmail·com)

The article "Restyaboard – Uma alternativa open source ao Trello" was originally published at BR-Linux.org, by Augusto Campos.


          Write an Android application by selvaprakash83   
I'm looking for Android App developers to create an App that can be used to scan specific document types and extract and store the text online in Google docs or something like that. I found some open source codes available that can be made use of for OCR purpose... (Budget: ₹37500 - ₹75000 INR, Jobs: Android, Mobile Phone)
          Software Engineer - Bridgewater Associates - Westport, CT   
Aren’t a punch-the-clock coder — technology has always been pervasive in your life, from building drones to contributing to open source sites....
From Bridgewater Associates - Sun, 25 Jun 2017 06:53:27 GMT - View all Westport, CT jobs
          Software Developer - Bridgewater Associates - Westport, CT   
Aren’t a punch-the-clock coder — technology has always been pervasive in your life, from building drones to contributing to open source sites Possess high...
From Bridgewater Associates - Tue, 23 May 2017 10:21:21 GMT - View all Westport, CT jobs
             
My session description for the Open Source conf is up.
          Senior Data Architect - Stem Inc - San Francisco Bay Area, CA   
Help design, develop and implement a resilient and performant distributed data processing platform using open source Big Data Technologies....
From Stem Inc - Tue, 27 Jun 2017 05:52:01 GMT - View all San Francisco Bay Area, CA jobs
          Going Reactive: Event-Driven, Scalable & Resilient Systems   

Learn how to build modern, scalable, reactive and resilient applications, ready for the real-time web.

The skills of building Event-Driven, Highly Concurrent, Scalable & Resilient Systems are becoming increasingly important in our new world of Cloud Computing, multi-core processors, Big Data and Real-Time Web.

Unfortunately, many people are still doing it wrong; using the wrong tools, techniques, habits and ideas. In this talk we will look at what it means to 'Go Reactive' and discuss some of the most common (and some not so common but superior) practices; what works - what doesn't work - and why.

Jonas Bonér
Jonas Bonér is a geek, programmer, speaker, musician, writer and Java Champion. He is the CTO and co-founder of Typesafe and is an active contributor to the Open Source community; most notably founded the Akka Project and the AspectWerkz AOP compiler (now AspectJ). Learn more at: jonasboner.com

Cast: JavaZone

Tags: JavaZone 2013 Jonas Bonér Arch, Big Data and NoSQL, Distributed systems and cloud and Enterprise


          Tizen installed on the Nexus 7 3G thanks to a port   
As many will know, Tizen is an open source, Linux-based operating system project sponsored by the Linux Foundation, born from the collaboration between Intel and Samsung. It grew out of the...
          Ubuntu Tablet OS, the announcement in 5 hours   
Not even a month has passed since we first heard previews of Ubuntu Phone OS, the new open source operating system for smartphones created by Canonical, and today we are already talking about...
          Open Source For You – July 2017   

English | 108 pages | PDF | 106 MB



          How to replace PNG images by SVG in epub?   
I have been trying to substitute SVG images for PNG images in an epub off and on for three years (mostly off) with very little (basically no) success. From time to time I learn or think of something to try, but I am now at the end of my rope. The Creative Commons licensed book "Pro Git" by Scott Chacon has PNG illustrations, some of which are screen shots and many of which are line art diagrams with text. Naturally enough for an open source book about git, the source materials are in a git repository https://github.com/progit/progit (a second edition has since been published). This is all fine, but the diagrams are tiny on high resolution screens. I learned that the source for the diagrams is in dia format and gets rendered into PNG. It turns out that dia can be rendered as SVG. So I unzipped the epub, deleted the diagram PNGs, generated the SVGs, change the
          [comp] My annual animosity toward ATO for their failure to provide open eTax   
I wonder if I should try installing etax on wine and see if I can do my tax return that way, or just fill it out on paper in my annual protest to the ATO (going on the assumption it must cost ATO more to process paper based tax returns than eTax based tax returns) that they still make people buy a broken non-free operating system of some description if they wish to fill in their tax return electronically. If I were not so concerned at the environmental waste of the idea and of the Australian government giving even more money to a US company for no good reason (Microsoft), I would have thought about buying a computer with windows installed for using it once a year simply to do tax, then claim the entire cost against tax. I hope by next year ATO may have caught up with the ABS (The Australian Census is completed online) in allowing someone with an open source web browser such as Firefox to do their tax on any platform.</rant>

I was happy to see some more information has been released, thanks to Andrew Donnellan about the underlying activity behind this lack of support.

          Temperature Monitoring with Spiceworks?   

Spiceworks does not gather temperature as far as I know, and even if it did, the alerts would only come during the scan. Per the http://community.spiceworks.com/help/Setting_Up_Monitors_And_Email_Alerts documentation:

How your monitors are evaluated: Monitors are checked after your network is scanned. If a monitor's threshold is met, you will be notified.
The majority of monitors are evaluated after a scheduled network scan is completed. You can learn how to change the frequency of these scans or run a manual scan.
The exception is the online/offline condition for devices. These are scanned about every fifteen minutes. That way you can keep on top of offline machines.

We use another monitoring tool called Zenoss (open source) and then any machine that gives temperature (or anything else) via SNMP can be monitored and an alert can be...


          The Switch to Linux - Introduction   


I've had my eye on Microsoft's Windows 8 for a while now. Some of the ideas initially sounded quite promising: Windows running on the ARM architecture along with a GUI that could work well on both the tablet and traditional PC form-factors. But as time went on, my excitement started to turn to worry. And after trying many of the developer previews and now the final releases, I can safely say I don't really like the new direction of Windows. That's not to say that there aren't some really great things about the Windows 8 platform, there are, but it may not be suitable for everyone, especially those of us who might be considered "power users". This is why I've decided to begin making the switch to Linux. I want to make this transition from Windows to Linux a public and open experience, in the spirit of the open source concept and in the hope that my successes and failures may help others who are considering the switch as well.

Let me just quickly explain a little about myself. I've been working for a media company doing mostly graphic design and video production for the past four years. Some of my everyday duties include creating advertisements, designing digital signs / menu boards, producing GUI elements for interactive displays, and web design. I have also been doing a lot of programming and have created applications for the RCMP and a few educational institutions. I also wrote the digital signage application that my company now uses to deploy our product across North America. I am by no means an expert in the realm of programming, but I have taught myself enough to be fairly useful. I use dozens of Windows applications every day to do what I do, so this switch to Linux will most likely be a slow transition. In the meantime I will keep my Windows 7 system on hand until I no longer require it, which may be a while.

Although I have had my eye on many Linux distributions for a while, I am a total noob and have only really tinkered with a few, so this blog will be coming from a new user's point of view. In saying that, I apologize to any Linux gurus who may be offended by my ignorance or terminology but... get over it. We've all got to start somewhere. I will try to talk a little bit about every step I take in setting up a Linux system and getting productive on it. Please feel free to comment and let me know if you have any tips or suggestions for a new Linux user that may have helped you in the beginning.

          Amanda Gelender - Social Impact & Open Source: How developers can drive change.   

Description

Open source technology is catalyzing social change across the globe. In this keynote session, Amanda Gelender, GitHub's Senior Manager of Social Impact, will discuss the importance of lowering barriers to entry in technology innovation. You’ll walk away with an understanding of how developers can utilize their skills by contributing to open source projects that are creating waves of powerful change in communities across the globe.


          Petya Not Really Ransomware - Open Source Programmer   


          Perl Developer -    
We are looking for Developers who are not afraid to experiment, think big or aim high! Based in the beautiful city of Amsterdam, our technology department is over 1800 people strong. We believe that diversity makes us stronger – with over 60 different nationalities in the technology department alone, you will be able to connect with inspiring colleagues, absorb new skills, and develop your career in a multicultural environment. B.responsible. Our technical culture derives strongly from our strong ties to the Perl community. We appreciate what open source means both for our business and for internal projects. Some of our most useful hacks inside the company have come from someone scratching the proverbial itch. As a Software Developer, you are responsible for the development, performance, and scaling of our public website as well as internal systems. And even if you don't have Perl experience, your colleagues will help you get up to speed so you can solve real problems from day one. Important aspects of the job include:

  • Rapidly develop next-generation scalable, flexible, and high-performance systems
  • Solve issues with the site and internal systems, prioritizing based on customer impact
  • Act as an intermediary for problems, with both technical and non-technical audiences
  • Work independently and take responsibility for technical decisions within a team
  • Contribute to the growth of Booking.com through interviewing, on-boarding, or other recruitment efforts
          New Semantic Publishing Benchmark Record   

There is a new SPB (Semantic Publishing Benchmark) 256 Mtriple record with Virtuoso.

As before, the result has been measured with the feature/analytics branch of the v7fasttrack open source distribution, and it will soon be available as a preconfigured Amazon EC2 image. The updated benchmarks AMI with this version of the software will be out there within the next week, to be announced on this blog.

On the Cost of RDF Query Optimization

RDF query optimization is harder than the relational equivalent; first, because there are more joins, hence an NP complete explosion of plan search space, and second, because cardinality estimation is harder and usually less reliable. The work on characteristic sets, pioneered by Thomas Neumann in RDF3X, uses regularities in structure for treating properties usually occurring in the same subject as columns of a table. The same idea is applied for tuning physical representation in the joint Virtuoso / MonetDB work published at WWW 2015.

The Virtuoso results discussed here, however, are all based on a single RDF quad table with Virtuoso's default index configuration.

Introducing query plan caching raises the Virtuoso score from 80 qps to 144 qps at the 256 Mtriple scale. The SPB queries are not extremely complex; lookups with many more triple patterns exist in actual workloads, e.g., Open PHACTS. In such applications, query optimization indeed dominates execution times. In SPB, data volumes touched by queries grow near linearly with data scale. At the 256 Mtriple scale, nearly half of CPU cycles are spent deciding a query plan. Below are the CPU cycles for execution and compilation per query type, sorted by descending sum of the times, scaled to milliseconds per execution. These are taken from a one minute sample of running at full throughput.

Test system is the same used before in the TPC-H series: dual Xeon E5-2630 Sandy Bridge, 2 x 6 cores x 2 threads, 2.3GHz, 192 GB RAM.

We measure the compile and execute times, with and without using hash join. When considering hash join, the throughput is 80 qps. When not considering hash join, the throughput is 110 qps. With query plan caching, the throughput is 145 qps whether or not hash join is considered. Using hash join is not significant for the workload but considering its use in query optimization leads to significant extra work.
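
As a rough sanity check on these numbers: with nearly half of all CPU cycles going to query planning at 80 qps, removing compilation entirely should roughly double throughput, to about 160 qps; the measured 145 qps with plan caching is consistent with that, allowing for cache lookups and other remaining overheads.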

With hash join

Compile    Execute    Total      Query
3156 ms    1181 ms    4337 ms    Total
1327 ms      28 ms    1355 ms    query 01
 444 ms     460 ms     904 ms    query 08
 466 ms      54 ms     520 ms    query 06
 123 ms     268 ms     391 ms    query 05
 257 ms       5 ms     262 ms    query 11
 191 ms      59 ms     250 ms    query 10
   9 ms     179 ms     188 ms    query 04
 114 ms      26 ms     140 ms    query 07
  46 ms      62 ms     108 ms    query 09
  71 ms      25 ms      96 ms    query 12
  61 ms      13 ms      74 ms    query 03
  47 ms       2 ms      49 ms    query 02

Without hash join

Compile    Execute    Total      Query
1816 ms    1019 ms    2835 ms    Total
 197 ms     466 ms     663 ms    query 08
 609 ms      32 ms     641 ms    query 01
 188 ms     293 ms     481 ms    query 05
 275 ms      61 ms     336 ms    query 09
 163 ms      10 ms     173 ms    query 03
 128 ms      38 ms     166 ms    query 10
 102 ms       5 ms     107 ms    query 11
  63 ms      27 ms      90 ms    query 12
  24 ms      57 ms      81 ms    query 06
  47 ms       1 ms      48 ms    query 02
  15 ms      24 ms      39 ms    query 07
   5 ms       5 ms      10 ms    query 04

Considering hash join always slows down compilation, and sometimes improves and sometimes worsens execution. Some improvement in cost-model and plan-space traversal-order is possible, but altogether removing compilation via caching is better still. The results are as expected, since a lookup workload such as SPB has little use for hash join by nature.

The rationale for considering hash join in the first place is that analytical workloads rely heavily on this. A good TPC-H score is simply unfeasible without this as previously discussed on this blog. If RDF is to be a serious contender beyond serving lookups, then hash join is indispensable. The decision for using this however depends on accurate cardinality estimates on either side of the join.

Previous work (e.g., papers from FORTH around MonetDB) advocates doing away with a cost model altogether, since one is hard and unreliable with RDF anyway. The idea is not without its attraction but will lead to missing out on analytics or to relying on query hints for hash join.

The present Virtuoso thinking is that going to rule based optimization is not the preferred solution, but rather using characteristic sets for reducing triples into wider tables, which also cuts down on plan search space and increases reliability of cost estimation.

When looking at execution alone, we see that actual database operations are low in the profile, with memory management taking the top 19%. This is due to CONSTRUCT queries allocating small blocks for returning graphs, which is entirely avoidable.


          Virtuoso updated to version 7.2.1    

We're pleased to announce that Virtuoso 7.2.1 is now available, and includes various enhancements and bug fixes. Important additions include new support for xsd:boolean and TIMEZONE-less DATETIME & xsd:dateTime; and significantly improved compatibility with the Jena and Sesame Frameworks.

New product features as of June 24, 2015, v7.2.1, include:

  • Virtuoso Engine

    • Added support for TIMEZONE-less xsd:dateTime and DATETIME
    • Added support for xsd:boolean
    • Added new text index functions
    • Added better handling of HTTP status codes on SPARQL graph protocol endpoint
    • Added new cache for compiled regular expressions
    • Added support for expression in TOP/SKIP
  • SPARQL

    • Added support for SPARQL GROUPING SETS
    • Added support for SPARQL 1.1 EBV (Efficient Boolean Value)
    • Added support for define input:with-fallback-graph_uri
    • Added support for define input:target-fallback-graph-uri
  • Jena & Sesame Compatibility

    • Added support for using rdf_insert_triple_c() to insert BNode data
    • Added support for returning xsd:boolean as true/false rather than 1/0
    • Added support for maxQueryTimeout in Sesame2 provider
  • JDBC Driver

    • Added new methods setLogFileName and getLogFileName
    • Added new attribute "logFileName" to VirtuosoDataSources for logging support
  • Faceted Browser

    • Added support for emitting HTML5+Microdata instead of RDFa as default HTML page
    • Added query optimizations
    • Added new footer icons to /describe page
  • Conductor and DAV

    • Added support for VAD dependency tree
    • Added support for default vdirs when creating new listeners
    • Added support for private RDF graphs
    • Added support for LDP in DAV API
    • Added option to create shared folder if not present
    • Added option to enable/disable DET graphs binding
    • Added option to set content length threshold for asynchronous sponging
    • Added folder option related to .TTL redirection
    • Added functions to edit turtle files
    • Added popup dialog to search for unknown prefixes
    • Added registry option to add missing prefixes for .TTL files
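As a small illustration of the xsd:boolean additions above, a sketch (the graph and property names are hypothetical): a comparison expression in the SELECT list now yields a typed boolean, and through the Jena and Sesame providers it comes back as true/false rather than 1/0.

  SELECT ?item ( ?price < 100 AS ?cheap )
  WHERE { ?item <http://example.com/price> ?price }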
More details of the additions, fixes, and other changes in this update of both Open Source and Commercial Editions may be found on the Virtuoso News page. Additional Information:
          In Hoc Signo Vinces (part 21 of n): Running TPC-H on Virtuoso Elastic Cluster on Amazon EC2   

We have made an Amazon EC2 deployment of Virtuoso 7 Commercial Edition, configured to use the Elastic Cluster Module with TPC-H preconfigured, similar to the recently published OpenLink Virtuoso Benchmark AMI running the Open Source Edition. The details of the new Elastic Cluster AMI and steps to use it will be published in a forthcoming post. Here we will simply look at results of running TPC-H 100G scale on two machines, and 1000G scale on four machines. This shows how Virtuoso provides great performance on a cloud platform. The extremely fast bulk load — 33 minutes for a terabyte! — means that you can get straight to work even with on-demand infrastructure.

In the following, the Amazon instance type is R3.8xlarge, each with dual Xeon E5-2670 v2, 244G RAM, and 2 x 300G SSD. The image is made from the Amazon Linux with built-in network optimization. We first tried a RedHat image without network optimization and had considerable trouble with the interconnect. Using network-optimized Amazon Linux images inside a virtual private cloud has resolved all these problems.

The network optimized 10GE interconnect at Amazon offers throughput close to the QDR InfiniBand running TCP-IP; thus the Amazon platform is suitable for running cluster databases. The execution that we have seen is not seriously network bound.

100G on 2 machines, with a total of 32 cores, 64 threads, 488 GB RAM, 4 x 300 GB SSD

Load time: 3m 52s
Run   Power       Throughput   Composite
1     523,554.3   590,692.6    556,111.2
2     565,353.3   642,503.0    602,694.9

1000G on 4 machines, with a total of 64 cores, 128 threads, 976 GB RAM, 8 x 300 GB SSD

Load time: 32m 47s
Run   Power       Throughput   Composite
1     592,013.9   754,107.6    668,163.3
2     896,564.1   828,265.4    861,738.4
3     883,736.9   829,609.0    856,245.3

For the larger scale we did 3 sets of power + throughput tests to measure consistency of performance. By the TPC-H rules, the worst (first) score should be reported. The first power score, even coming right after bulk load, is markedly lower than the next due to working-set effects; this is seen to a lesser degree with the first throughput score also.

The numerical summaries are available in a report.zip file, or individually --

Subsequent posts will explain how to deploy Virtuoso Elastic Clusters on AWS.

In Hoc Signo Vinces (TPC-H) Series


          Introducing the OpenLink Virtuoso Benchmarks AMI on Amazon EC2   

The OpenLink Virtuoso Benchmarks AMI is an Amazon EC2 machine image with the latest Virtuoso open source technology preconfigured to run —

  • TPC-H , the classic of SQL data warehousing

  • LDBC SNB, the new Social Network Benchmark from the Linked Data Benchmark Council

  • LDBC SPB, the RDF/SPARQL Semantic Publishing Benchmark from LDBC

This package is ideal for technology evaluators and developers interested in getting the most performance out of Virtuoso. This is also an all-in-one solution to any questions about reproducing claimed benchmark results. All necessary tools for building and running are included; thus any developer can use this model installation as a starting point. The benchmark drivers are preconfigured with appropriate settings, and benchmark qualification tests can be run with a single command.

The Benchmarks AMI includes a precompiled, preconfigured checkout of the v7fasttrack github repository, checkouts of the github repositories of the benchmarks, and a number of running directories with all configuration files preset and optimized. The image is intended to be instantiated on a R3.8xlarge Amazon instance with 244G RAM, dual Xeon E5-2670 v2, and 600G SSD.

Benchmark datasets and preloaded database files can be downloaded from S3 when large, and generated as needed on the instance when small. As an alternative, the instance is also set up to do all phases of data generation and database bulk load.

The following benchmark setups are included:

  • TPC-H 100G
  • TPC-H 300G
  • LDBC SNB Validation
  • LDBC SNB Interactive 100G
  • LDBC SNB Interactive 300G (SF3)
  • LDBC SPB Validation
  • LDBC SPB Basic 256 Mtriples (SF5)
  • LDBC SPB Basic 1 Gtriple

The AMI will be expanded as new benchmarks are introduced, for example, the LDBC Social Network Business Intelligence or Graph Analytics.

To get started:

  1. Instantiate machine image ami-eb789280 (AMI ID is subject to change; you should be able to find the latest by searching for "OpenLink Virtuoso Benchmarks" in "Community AMIs"; this one is short-named virtuoso-bench-6) with an R3.8xlarge instance (see the CLI sketch after these steps).

  2. Connect via ssh.

  3. See the README (also found in the ec2-user's home directory) for full instructions on getting up and running.
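For those who script their provisioning, steps 1 and 2 look roughly like this with the AWS CLI (a sketch; the key pair, security group, and public DNS name are placeholders to substitute):

  aws ec2 run-instances --image-id ami-eb789280 --instance-type r3.8xlarge \
      --key-name YOUR-KEYPAIR --security-group-ids YOUR-SECURITY-GROUP

  ssh -i YOUR-KEYPAIR.pem ec2-user@YOUR-INSTANCE-PUBLIC-DNS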


          The Virtuoso Science Library   

There is a lot of scientific material on Virtuoso, but it has not been presented all together in any one place. So I am making here a compilation of the best resources with a paragraph of introduction on each. Some of these are project deliverables from projects under the EU FP7 programme; some are peer-reviewed publications.

For the future, an updated version of this list may be found on the main Virtuoso site.

European Project Deliverables

  • GeoKnow D 2.6.1: Graph Analytics in the DBMS (2015-01-05)

    This introduces the idea of unbundling basic cluster DBMS functionality like cross partition joins and partitioned group by to form a graph processing framework collocated with the data.

  • GeoKnow D2.4.1: Geospatial Clustering and Characteristic Sets (2015-01-06)

    This presents experimental results of structure-aware RDF applied to geospatial data. The regularly structured part of the data goes in tables; the rest is triples/quads. Furthermore, for the first time in the RDF space, physical storage location is correlated to properties of entities, in this case geo location, so that geospatially adjacent items are also likely adjacent in the physical data representation.

  • LOD2 D2.1.5: 500 billion triple BSBM (2014-08-18)

    This presents experimental results on lookup and BI workloads on Virtuoso cluster with 12 nodes, for a total of 3T RAM and 192 cores. This also discusses bulk load, at up to 6M triples/s and specifics of query optimization in scale-out settings.

  • LOD2 D2.6: Parallel Programming in SQL (2012-08-12)

    This discusses ways of making SQL procedures partitioning-aware, so that one can, map-reduce style, send parallel chunks of computation to each partition of the data.

Publications

2015

  • Pham, M.-D., Passing, L., Erling, O., and Boncz, P.A. "Deriving an Emergent Relational Schema from RDF Data," WWW, 2015.

    This paper shows how RDF is in fact structured and how this structure can be reconstructed. This reconstruction then serves to create a physical schema, reintroducing all the benefits of physical design to the schema-last world. Experiments with Virtuoso show marked gains in query speed and data compactness.

2012

  • Orri Erling: Virtuoso, a Hybrid RDBMS/Graph Column Store. IEEE Data Eng. Bull. (DEBU) 35(1):3-8 (2012)

    This paper introduces the Virtuoso column store architecture and design choices. One design is made to serve both random updates and lookups as well as the big scans where column stores traditionally excel. Examples are given from both TPC-H and the schema-less RDF world.

  • Minh-Duc Pham, Peter A. Boncz, Orri Erling: S3G2: A Scalable Structure-Correlated Social Graph Generator. TPCTC 2012:156-172

    This paper presents the basis of the social network benchmarking technology later used in the LDBC benchmarks.

2009

  • Orri Erling, Ivan Mikhailov: Faceted Views over Large-Scale Linked Data. LDOW 2009

    This paper introduces anytime query answering as an enabling technology for open-ended querying of large data on public service end points. While not every query can be run to completion, partial results can most often be returned within a constrained time window.

  • Orri Erling, Ivan Mikhailov: Virtuoso: RDF Support in a Native RDBMS. Semantic Web Information Management 2009:501-519

    This is a general presentation of how a SQL engine needs to be adapted to serve a run-time typed and schema-less workload.

2007

  • Orri Erling, Ivan Mikhailov: RDF Support in the Virtuoso DBMS. CSSW 2007:59-68

    This is an initial discussion of RDF support in Virtuoso. Most specifics are by now different but this can give a historical perspective.


          IT Developer/Architect - International Software systems - Maryland City, MD   
Proficiencies in DevOps. This individual must be well versed in DevOps using industry standards and open source resources....
From Indeed - Thu, 29 Jun 2017 18:32:30 GMT - View all Maryland City, MD jobs
          SSD Advisory – Odoo CRM Code Execution   
Vulnerability Summary: The following advisory describes an arbitrary Python code execution vulnerability found in Odoo CRM version 10.0. Odoo is a suite of open source business apps that cover all your company's needs: CRM, eCommerce, accounting, inventory, point of sale, project management, etc. Odoo's unique value proposition is to be at the same time very easy to …
          08×03 Cross-platform C# applications with Xamarin and Pablo Escribano   
About Pablo Escribano: Pablo is an analyst in .Net/Mono technologies. He has been Secretary of the Free Software Office of the University of Huelva and currently coordinates Mono Hispano. He is involved in the Cosificando project, a free platform for the processing and printing of three-dimensional objects. Outline: "The Xamarin effect"; mobile applications; open source; alternatives (PhoneGap); history […]
             
Kudos to Fawcette, one of the smartest people in the business. No I'm not attacking open source, and I participate in open source myself, more than most of the advocates. But I am also aware of the hypocrisy of venture capitalists and IBM execs, who take home millions of dollars a year in compensation, and expect programmers to work for love and no money. It pisses me off that they get away with such excessive greed, and that my fellow programmers sell out so cheap. Programmers have to have health insurance, send their kids to good schools, make mortgage payments, and retire someday. And these days they have to hire lawyers to defend themselves against the lawyers of the big companies. It's romantic to think of programmers working just for the approval of their peers. Sure it's nice to get approval. It's even nicer to get approval and get paid for your work.
          GraphStudioNext 0.7.1.28   
GraphStudioNext is a DirectShow graph editor. It's designed as an alternative to Microsoft Graph Edit in the Windows SDK with many additional features. [License: Open Source | Requires: Win 10 / 8 / 7 / Vista / XP | Size: 4.17 MB ]
          All your computers are belong to us: the dystopian future of security is now   

Alon is contemplating replacing his laptop so I figured I would recommend he take a look at Purism, a company offering laptops that are designed for people that care about security and privacy.

Unfortunately, once I started looking a bit more closely at this little rabbit it ran deep down into its little rabbit hole and I discovered that in reality there are currently very very few hardware options for people that want a computer that is not backdoored with a sophisticated rootkit at the hardware level.

I followed the Snowden revelations closely and even read Glenn Greenwald's "No Place to Hide", but still the extent of this was news to me. Apparently, after 9/11, an NSA program called "Sentry Owl" successfully coerced major US PC companies into co-designing hardware-level rootkits into their products.

By 2006 the new generation of Intel hardware came with Intel ME ("Management Engine"), the secret computer within your computer pre-installed.

The ME has a full network stack with its own MAC address that works even when your computer is turned off, and it has direct access to RAM and to all your hard drives and peripherals. It's a 5MB proprietary encrypted blackbox that was designed to be extensible while being extremely hard to reverse engineer. The ME CPU runs its own custom non-x86 instruction set (ARC), the firmware is compressed with a custom-designed compression algorithm, and all code is signed and encrypted. Intel is extremely uncooperative with anyone that wants details on how this thing works, including big customers like Google.

If you wanted to design a universal hardware backdoor that is embedded into all PCs this is how you would do it.

The people who seem to know the most about Intel ME outside of the intelligence community are the free software "nuts" attempting to develop a free (free as in free speech) boot process:

https://libreboot.org/faq/#intel

Unfortunately, the latest generation of AMD hardware (post-2013) has its own version of Intel ME called the AMD PSP (Platform Security Processor) which isn't any better:

https://libreboot.org/faq/#amd

For people that want a computer that isn't backdoored at the hardware level libreboot recommends not using modern hardware at all. Yikes!

Intel ME and the AMD PSP have the NSA's fingerprints all over them. I would be very, very surprised if they turned out NOT to be designed (or at least co-designed) with the concerns of US intelligence capabilities in mind.

Unfortunately, that's a problem even if you trust the NSA not to abuse their powers because, as one 29-year-old former NSA contractor armed with a thumb drive showed, the NSA's security isn't all that great.

Even those who think it's wise to trust the NSA would probably think twice about trusting the legions of private contractors it depends on to run its mass warrantless surveillance programs.

Even worse, according to experts like Bruce Schneier the game of cyber-espionage is all offense, no defense. In other words, foreign intelligence agencies most likely already had all the documents Snowden leaked because they were already in the NSA's systems.

So now you also have to trust not just the NSA, but the Russian FSB, the Chinese Cyberarmy, and potentially anyone working for them in past, present and future.

Now I get why the Chinese are developing their own CPUs, why the Russians and Germans are reverting to typewriters and paper for classified information, and what a top US intelligence official means when he says:

I know how deep we are in our enemies' networks without them having any idea that we're there. I'm worried that our networks are penetrated just as deeply

The only saving grace is that, given the risk of detection, political fallout, and attack devaluation, I reckon advanced attackers regard hardware-level backdoors as tools of last resort, to be used only against high-value targets. For the little guys, they'll prefer plausibly deniable exploits in endpoint software that were either accidentally or maliciously inserted. And yes, part of Sentry Owl and similar programs by other intelligence agencies involves inserting undercover agents into private companies, and presumably into open source projects like Debian and Ubuntu as well.

Bottom line: options for someone who wants a computer with reasonable assurance that it cannot be remotely controlled at the hardware level when connected to the Internet are virtually non-existent.

You can raise the bar a little bit without sacrificing too much comfort with products like those from Purism:

https://puri.sm/products/

Features I like:

  • No binary blob drivers (which I'm certain are ALL backdoored)
  • hardware cut-off switches for RF, wireless and camera
  • Qubes OS certified / pre-installation option

https://www.qubes-os.org/news/2015/12/09/purism-partnership/

Stuff I don't like:

Possibly the closest thing you can get to a free computer at the hardware and software level is by buying old refurbished hardware directly from the libreboot guys:

https://minifree.org/

Unfortunately, you'll need to pay dearly for freedom. The laptop hardware was cutting edge in 2008. The server/workstation board is better since it took AMD longer to get on the backdoor bandwagon.

Also, given the well-established practice of intercepting hardware en route to install implants, if you don't have the skills to inspect hardware yourself, how can you know that supposedly clean hardware hasn't been tampered with on its way to you?

Paranoia, justified or not, is a tough hobby.


          Why Developers Now Compare Vue.js to JavaScript Giants Angular and React?   

Vue.js, an MIT-licensed open source project, is a JavaScript library for building web interfaces. The library was first released in 2013, but for the next two years not many developers took notice of it among web framework technologies.
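To see why developers make the comparison, here is a minimal sketch of Vue's declarative rendering (the element id and data are illustrative):

  <div id="app">{{ message }}</div>

  <script src="https://unpkg.com/vue"></script>
  <script>
    // A Vue instance binds the data object to the DOM declaratively;
    // assigning to app.message re-renders the div automatically.
    var app = new Vue({
      el: '#app',
      data: { message: 'Hello Vue!' }
    });
  </script>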

The post Why Developers Now Compare Vue.js to JavaScript Giants Angular and React? appeared first on designrfix.com.


          About the Emerging Battles Over Textbooks: Options from Apple to Open Initiatives   
Two dramatically opposed announcements from Apple and the state of California put the textbook publishing industry on notice recently that it could be facing rapid disruption. But open textbooks can't be created and altered as easily as open source software.
          Application Developer - FC USA Inc - Montvale, NJ   
Java/J2EE experience (5 to 7 years) with strong understanding of OO with Open Source Frameworks including Hibernate, Spring, DI, Apache CXF etc., and have...
From FC USA - Thu, 08 Jun 2017 21:04:56 GMT - View all Montvale, NJ jobs
          Red Hat unveils open source hyperconverged infrastructure   
Red Hat (https://www.redhat.com/en) has introduced its production-ready open source hyperconverged infrastructure. Red Hat Hyperconverged Infrastructure uses Red Hat's virtualisation platform, as...
          Linux Foundation launches a new project to secure software-defined networks   

The Linux Foundation has announced the Open Security Controller (OSC) project. The new project is a software-defined orchestration solution for multi-cloud environments. Software-defined networks ...

The post Linux Foundation launches a new project to secure software-defined networks appeared first on Open Source For You.


          Minor differences between C and C++   

This article discusses the subtle and minute differences between C and C++. An article titled ‘Major Differences Between C and C++’ would have been ...

The post Minor differences between C and C++ appeared first on Open Source For You.


          Web Robots: The worker bees of Internet   

Web robots, also known as Web crawlers and Web spiders, traverse the Internet to extract various types of information. Web robots can be used ...
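As a toy illustration of what such a robot does, a sketch in Node.js (the start URL is arbitrary; a real crawler would also honor robots.txt, throttle its requests, and use a proper HTML parser):

  var https = require('https');

  https.get('https://example.com/', function (res) {
      var html = '';
      res.on('data', function (chunk) { html += chunk; });
      res.on('end', function () {
          // Naive link extraction: these are the URLs the robot would visit next.
          var links = html.match(/href="https?:\/\/[^"]+"/g) || [];
          links.forEach(function (link) { console.log(link); });
      });
  });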

The post Web Robots: The worker bees of Internet appeared first on Open Source For You.


          Era of cloud computing grows bigger in 2017   

Cloud computing is critical to the growth of a digital content business. The public cloud services market in India is projected to grow 38 ...

The post Era of cloud computing grows bigger in 2017 appeared first on Open Source For You.


          How to build a smart attendance register in App Inventor 2   

App Inventor is a visual block-building language for creating Android apps. Over the past few months, we have been developing simple Android apps through ...

The post How to build a smart attendance register in App Inventor 2 appeared first on Open Source For You.


          Conda: The soul of Anaconda   

Conda, which is included in Anaconda and Miniconda, is an open source package management system and environment management system for installing multiple versions of ...
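As a taste of the workflow the article covers, a sketch (package and version choices are illustrative; older conda releases use "source activate" instead of "conda activate"):

  conda create --name sandbox python=3.5 numpy   # new environment with a pinned Python
  conda activate sandbox                         # switch into it
  conda install scipy                            # add packages later as needed
  conda env list                                 # list all environments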

The post Conda: The soul of Anaconda appeared first on Open Source For You.


          Developing a basic GUI application using JavaFX in Eclipse   

This tutorial takes readers through the process of developing a basic GUI application using JavaFX in Eclipse, and is simple enough for even a ...

The post Developing a basic GUI application using JavaFX in Eclipse appeared first on Open Source For You.


          Sign up with Ubidots to power your IoT app   

Ubidots is a hosted IoT platform in the cloud that can help to jumpstart your IoT application. It was created in a startup accelerator, ...

The post Sign up with Ubidots to power your IoT app appeared first on Open Source For You.


          DevOps series: Ansible deployment of RabbitMQ   

RabbitMQ, which is free and open source, is the world’s most widely deployed message broker. It is used by several big companies like Ford, ...
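A minimal sketch of the kind of playbook the series walks through (the host group and the Debian/Ubuntu package name are assumptions):

  - hosts: queue_servers
    become: true
    tasks:
      - name: Install RabbitMQ
        apt:
          name: rabbitmq-server
          state: present
      - name: Ensure the broker is running and starts on boot
        service:
          name: rabbitmq-server
          state: started
          enabled: true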

The post DevOps series: Ansible deployment of RabbitMQ appeared first on Open Source For You.


          New Windows Azure Services Unleashed!   
We have recently announced a major set of updates to Windows Azure. Windows Azure Web Sites – This feature makes building .NET, Node.js, Java and PHP web experiences easier and supports Git and FTP deployment techniques. What's even cooler is support for popular open source web applications like WordPress, Joomla!, DotNetNuke, Umbraco and Drupal. Windows Azure Virtual Machines – This feature enables customers to move existing application workloads that currently reside on virtual hard disks (VHDs) between on-premises environments and...
          #1 MBRFilter helps protect the MBR against malware   

An open source tool that protects your MBR so that, for example, GRUB cannot write to it. Paradoxically great.

» author: Cachisen


          How to remove DarkKomet ransomware virus from system and infected programs   
Keep your PC safe from the DarkKomet ransomware virus, malware and ransomware. DarkKomet ransomware is regarded as a dubious application used as a cryptovirus. It is based on the code of the Hidden Tear open source project. Malware researchers who recently examined samples of it determined that it also has RAT components. The cryptovirus [...]
          Public Resource liberates "Life in the UK" book, building codes   


Rogue archivist Carl Malamud sez,

Public.Resource.Org has always been a strong supporter of British-American cooperation. In order to further what Winston Churchill so aptly dubbed our “Special Relationship,” I'm happy to announce two hands across the sea.

If you would like to be a citizen of the United Kingdom, you need to study a book called Life in the UK. The book is published by Her Majesty's Stationery Office, which is part of the amazingly well run National Archives. These are the folks that run legislation.gov.uk, the best legislative reference site in the world. Life in the UK has the kind of open license one has come to expect for government information, so we asked our friends at the Rural Design Cooperative to take a stab at creating an open version. They totally went to town, replacing the commercial stock photos with open artwork, creating much better navigation across the book, study guide, and tests, and making the tests better, and (of course!) publishing the whole thing as valid html and open source so you can fork it if you'd like and create your own version. Thanks to Oliver Morley, the Archivist of the United Kingdom, for enabling open publishing and to the folks at the Rural Design Cooperative for creating the new version. You can read the all new Open Life in the UK here.

I'm sorry to report that another agent of the UK government, the British Standards Institution, apparently didn't get the open government memo. As you know, we've posted a bunch of crucial public safety standards from the UK as well as the rest of Europe and the world. Well, the British Standards Institution decided that they didn't like the fact that we posted a copy of BS 8300:2009+A1, which is the “Design of Buildings and Their Approaches to Meet the Needs of Disabled People” which we have on our site and on the Internet Archive. They sent us a DMCA takedown notice. We sent them a strongly-worded 4-page answer and that answer is NFW. You can read all the traffic back and forth with the standards people on our docket of RFCs.

The "Open Life in the UK" that Public Resource put together is much better than the study guide I used when I was becoming a British citizen. On behalf of all migrants to Britain, thank you, Public Resource!

Open Life in the UK (Thanks, Carl!)

          Wikiasari – The New Rival to Major Search Engines in 2007?   
Wikipedia, which appears in the top 5 of search engine result pages almost all the time, has come up with a new search engine called WIKIASARI to compete against the major search engines. It is a project to create a search engine. In 2003, Wikia began to develop an open source search engine with user-editable search results. The search index is free, both in terms of cost and freedom. […]
          MyRepublic Implements Unified Private Cloud to Enhance Infrastructure Manageability   
Learn how to achieve greater infrastructure flexibility and scalability with a unified private cloud that delivers the demands of a production-scale environment on open source software. Learn how your organization can cut time to market for new services, reduce hardware costs, unify fragmented infrastructure, and more. Published by: Red Hat
          OpenStack in Support of Public Cloud   
Discover OpenStack's integral role in delivering IT orchestration for public cloud integration, along with on-premises private cloud. Access it to learn from two organizations how to build open source public cloud environments by partnering with a trusted, managed provider, and more. Published by: Red Hat
          Monitoring open source software key for DevOps shops   
Open source software may be all the rage right now as the DevOps movement advances, but it's important to keep track of it carefully for licensing and security purposes. Continue reading this eGuide for information on how to accurately track your open source software, and the steps you should take to avoid licensing fees. Published by: Sonatype
          VPS for open source project   
I'm developing a free open source project that's nearing completion, and we're trying to find a home to host it. Any recommendations? It uses Laravel and NodeJS with socket.io - Source: www.lowendtalk.com
          Microsoft Is Not Open Source And Therefore Irrelevant?   

Originally posted on: http://ferventcoder.com/archive/2013/08/07/microsoft-is-not-open-source-and-therefore-irrelevant.aspx

I saw a YouTube video that, when you boiled it down, said that since Microsoft isn't predominantly an open source company they are irrelevant.

The speaker strikes me as someone who doesn't live in the real world. I rarely see a client that is exclusively Microsoft technology, especially after a number of mergers. I have often worked on projects that used WebSphere alongside IIS, DB2 and Oracle along with SQL Server, and open source tools along with purchased frameworks.

Of course we can always go down the argument that open source does not mean free or stable. You do have the source code to fix bugs in open source software because you have access to the code, but now you not only have to be an expert in your business code but also in all of the cookie-cutter framework code. That is very expensive and can seriously delay projects. I also don't find that open source code is any more or less stable than packaged code.

As a developer I have found that Microsoft’s tools are much better than those that I have seen in other development spaces.  They make development much more efficient and repeatable.  Are they perfect?  Not by a long shot.  Do many of the Microsoft tools and products leave much to be desired? Sure!  But so do open source tools, languages and products.  If you think anything is a silver bullet then you are deluding yourself.

Irrelevant? As a generalist in the IT space for the last 20+ years, I have found that every technology and platform has its place. Don't discount anything out of hand. Learn as many different languages, platforms and tools as you can without losing your sanity, and understand where they give you the most benefit. I feel that is the most responsible way of approaching technology. Let's stop being so absolute with our determinations of technologies and approaches.


          AEX800P 8FXO Modelo B   

What is it?

The AEX800P 8FXO Modelo B is a PCI 2.2 card that supports FXO modules for connecting analog phones and analog (PSTN) lines through a PC.

The AEX800P 8FXO Modelo B is a PCI card for Asterisk, Trixbox, Elastix, Snep, Disc-os, FreePBX and other open source telephony projects.

The AEX800P 8FXO Modelo B is fully compatible with all Digium analog cards and other analog cards and modules, with no driver changes.

The AEX800P 8FXO Modelo B uses Dahdi or Zaptel drivers.

The FXO modules of the AEX800P 8FXO Modelo B are used to connect existing analog telephone lines to your telephone system.

With the AEX800P 8FXO Modelo B card and the open source Asterisk PBX software plus a standard PC, users can create a Small Office/Home Office (SOHO) telephony environment that includes all the sophisticated features of a high-performance IP PBX platform with voice mail, IVR, conferencing and ACD.

You can also trunk with cellular interfaces, allowing greater savings in your telephony system.

Visit the Lojamundi blog to learn all about Asterisk cards

 

Versatility

The AEX800P 8FXO Modelo B is a PCI 2.2 card that supports FXO modules for connecting analog phones and analog (PSTN) lines through a PC.

Practicality

The AEX800P 8FXO Modelo B card is fully compatible with all Digium analog cards and other analog cards and modules, with no driver changes.


 
 

Technical Specifications

Motherboard:
AEX800P

Modules:
4 (four) FXO modules with 2 (two) channels each, for a total of 8 (eight) communication channels.

Hardware requirements:
500 MHz Pentium III or higher with 64MB of RAM

Availability:
Available for 5v and 3.3v PCI slots
 

 

 

Features

Scalable and effective SOHO solution
Termination gateway for analog phones
Analog cellular connection for existing PBXs
Point-to-point wireless applications between Asterisk servers
Caller ID and Call Waiting
ADSI phones
RJ-11C connector

 



R$1.510,55

          Iceland switches to Open Source   
All public administrations in Iceland are increasing their use of Free Software and open source software. In fact, the country's government recently launched a project under which all of its public institutions will migrate to this type of[...]
          jQuery Datatable in MVC … extended.   

Originally posted on: http://blog.davidbarrett.net/archive/2011/03/05/jquery-datatable-in-mvc-hellip-extended.aspx

There are a million plugins for jQuery, and when a web forms developer like myself works in MVC, making use of them is par for the course! MVC is the way now; web forms are but a memory!!

Grids / tables are my focus at the moment. I don't want to get into writing reams of CSS and HTML, but it's not acceptable to simply dump a table on the screen; functionality like sorting, paging, fixed headers and perhaps filtering is expected behaviour.

You potentially spend a long time getting everything hooked together when you just don’t need it.

That is where the jQuery DataTable plugin comes in.  It doesn’t have editing “out of the box” (you can add other plugins as you require to achieve such functionality).

What it does though is very nicely format a table (and integrate with jQuery UI) without needing to hook up any async actions etc.

Take a look here… http://www.datatables.net

I did in the first instance start looking at the Telerik MVC grid control – I'm a fan of Telerik controls, and if you are developing an in-house or open source app you get the MVC stuff for free…nice! Their grid however is far more than I require.

Note: Using Telerik MVC controls with your own jQuery and jQuery UI does come with some hurdles, mainly to do with the order in which all your jQuery is executing – I won’t cover that here though – mainly because I don’t have a clear answer on the best way to solve it!

One nice thing about the dataTable above is how easy it is to extend http://www.datatables.net/examples/plug-ins/plugin_api.html and there are some nifty examples on the site already…

I however have a requirement that wasn't on the site … I need a grid at the bottom of the page that will size automatically to the bottom of the page and be scrollable, if required, within its own space, i.e. everything above the grid doesn't scroll as well. Now a CSS master may have a great solution to this … I'm not that master and so didn't find one! The content above the grid can vary, so any kind of fixed positioning is out.

So I wrote a little extension for the DataTable, hooked that up to the document.ready event and window.resize event.

Initialising my dataTable(s)…

$(document).ready(function () {
    var dTable = $(".tdata").dataTable({
        "bPaginate": false,
        "bLengthChange": false,
        "bFilter": true,
        "bSort": true,
        "bInfo": false,
        "bAutoWidth": true,
        "sScrollY": "400px"
    });
});

My extension to the API to give me the resizing….

 

// **********************************************************************
// jQuery dataTable API extension to resize grid and adjust column sizes
//
$.fn.dataTableExt.oApi.fnSetHeightToBottom = function (oSettings) {
    var id = oSettings.nTable.id;
    var dt = $("#" + id);
    var top = dt.position().top;
    var winHeight = $(document).height();
    // Leave 83px free at the bottom for the fixed footer.
    var remain = (winHeight - top) - 83;
    dt.parent().attr("style", "overflow-x: auto; overflow-y: auto; height: " + remain + "px;");
    // Columns are fixed, so recalculate their sizes for the new height.
    this.fnAdjustColumnSizing();
};

 

This is very much in debug mode, so pretty verbose at the moment – I'll tidy that up later!

You can see the last call is a call to an existing method; as the columns are fixed and that normally involves some CSS voodoo, a call to adjust those sizes is required.

Just above is the style that the dataTable gives the grid wrapper div; I got that from some Firebug action and stuck my new height in.

The –83 is to give me the space at the bottom I require for the fixed footer!

 

Finally I hook that up to the load and window resize. I'm actually using jQuery UI tabs as well, so I've got that in the show event of the tabs.

 

$(document).ready(function () {
    var oTable;
    $("#tabs").tabs({
        "show": function (event, ui) {
            oTable = $('div.dataTables_scrollBody>table.tdata', ui.panel).dataTable();
            if (oTable.length > 0) {
                oTable.fnSetHeightToBottom();
            }
        }
    });
    $(window).bind("resize", function () {
        oTable.fnSetHeightToBottom();
    });
});

And that's all there is to it. Testament to the wonders of jQuery and the immense community surrounding it – to which I am extremely grateful.

I’ve also hooked up some custom column filtering on the grid – pretty normal stuff though – you can get what you need for that from their website.  I do hide the out of the box filter input as I wanted column specific, you need filtering turned on when initialising to get it to work and that input come with it!  Tip: fnFilter is the method you want.  With column index as a param – I used data tags to simply that one.


          Mats Lundälv - Open Accessibility Everywhere – Presenting the AEGIS Project   

These are the slides from this presentation: FSCONS 2011 - From the track: Universal Design — Aiming for Accessibility Presentation by Mats Lundälv The AEGIS project applies a comprehensive and holistic framework approach to providing generalised access to mainstream ICT. It focusses on contributing to infrastructures and open standards to support developers in delivering accessible solutions. The ambitions cover rich Internet applications and mobile devices, in addition to the desktop, as well as a wide range of impairing conditions. This session will outline where AEGIS currently stands when entering the final fourth year of the project period. The Open Accessibility Framework (OAF) will be presented, as well as the range of components and sample applications that are being developed, most of them free and open source. Short demos will be given of the local Swedish developments in the area of multi-modal language support – helping to communicate, read and write with the help of the Concept Coding Framework (CCF) and graphic symbol representation
          Ester Ytterbrink - FOSS for crips   

Ester Ytterbrink’s talk on "FOSS for Crips" - free software for accessibility at the track "Universal Design — Aiming for Accessibility" on FSCONS 2011. With a license to live — FOSS for crips by Ester Ytterbrink We all know the benefits and limitations of FOSS. How can we apply these to software accessibility tools? Why should there be more FOSS software for people with disabilities? Why are not all accessibility tools FOSS? What can crips give back to the FOSS community? My master thesis "FOSS för funkisar" was an investigation and exploration of the perceptions of FOSS software among people who work with and/or use accessibility tools. I will use this as a foundation as I reason around these questions. My wish is to challenge the relation between the producer, consumer and financier of accessibility tools. I want to show how FOSS can be used to change the way some of the people who are most dependent of their computers think of open source and free software.
          Appleseed Social Networking   

Presentation on the Appleseed Open Source, Distributed Social Networking Framework for FSCONS 2010.
          Edip: an open source, effects-oriented image-processing program, and much more   
Edip (Easy Digital Imaging Processing) is an image-processing program oriented toward effects and filters, but it can do much more. It is based on the opencv-3.0.0 library and uses Gtkmm-3 for its user interface. Edip is written in C++ and uses the MVC (Model View Controller) concept.

  • Model: a static library named libedip that you can reuse and modify under the terms of the GPLv3 license;
  • View: the widget library (contraction of Windows...

          The best camera apps for photography professionals   

Taking a good photo requires several ingredients. The most important is a good photographer behind the camera who follows the essential tips. Other keys are a phone with a good camera and good lighting conditions.

If we put all of this together we can probably squeeze the most out of our smartphone for taking photos, but let us recommend one last element that can be a great help. Here we bring you the best professional photography apps for Android.

After taking the photo you can always retouch it from your Android device, but thanks to the apps we list here you will have a complete manual mode at your disposal to adjust the ISO, exposure, shutter time, white balance and every detail you need in each scene.

Camera ZOOM FX Premium

A classic of Android photography. Camera Zoom FX provides a multitude of settings for taking RAW images that come out perfect. We can select the focus distance, ISO, exposure and shutter speed, combine shooting modes, assign actions to the hardware buttons and add all kinds of effects.

It also has an endless list of options, ranging from making collages and creating custom folders to voice activation. All of this comes in a fairly easy-to-follow design, so even beginners will quickly get the hang of it.

PhotoPills

This is one of the more expensive recommendations on the list, but its precision is worth it. PhotoPills is an app that specializes in recommending, guiding and helping us pick the perfect moment and position for the type of photo we want to take.

It helps regardless of our level as photographers, and it includes a personal assistant for planning around the position of the stars or the twilight, plus a powerful calculator for timelapses, long exposure and depth of field.

Cámara FV 5

Cámara FV 5 is another of those essential apps for anyone who wants a complete PRO mode. It offers the typical DSLR viewfinder with exposure time, aperture, white balance and the other meters. We can create timelapses, use program mode, and take long-exposure photographs of up to 30 seconds.

The app processes JPEG and RAW photos, has a manual focus mode, an AF-L button and digital zoom via touch gestures. A safe choice that is updated regularly.

  • Developer: FGAE
  • Download it on: Google Play
  • Price: €2.99
  • Category: Photography

Cameringo+

A very complete photography app, although with an older design. It offers multiple fisheye and wide-angle lenses, a GIF recorder with several filters, manual control of exposure, contrast and saturation, and a virtual flash for selfies. If you want to play around and create the classic tiny-planet images, Cameringo+ makes it easy.

HD Camera Pro

Design is not the strong point of HD Camera Pro either, but it has the options we might need to take complete photos on Android: manual focus, autofocus, timer, continuous shooting, white balance, ISO, and even a QR code scanner. It also adds a silent shutter, to avoid making noise while photographing with the phone.

SnapCamera HDR

In SnapCamera we have the photography app on one side and a powerful editor on the other. Looking at the first part, we find an interface that clearly recalls many DSLRs. With touch gestures we can zoom, focus or open the settings. There we find the ones already mentioned for the other apps: panoramic mode, timer, HDR, white balance, time lapse, ISO control...

DSLR Controller

Chainfire is a very popular developer on Android, mainly in the root community. But if you have a Canon camera you will probably want to take a look at this app. Over WiFi or a USB cable we can control the camera remotely from the phone. Not only that: it lets us play with all of the camera's own controls, from adjusting the ISO, focus and aperture to the image and video quality.

Footej Camera

Footej Camera has a freemium model: the free version provides the usual settings for focus, exposure, shutter speed and RAW format, plus its own gallery and GIF animations. The premium package, from 1.99 euros, adds interval shooting, burst mode, a histogram and antibanding. Somewhat plain in design, but it will satisfy those looking for something simple.

Open Camera

You do not have to break the bank to get a good app with a professional mode. We close our camera app recommendations for Android with Open Camera, an open source, free and very complete option. It has everything from exposure settings and manual controls to a timer, voice and remote control, zoom via gestures, sound muting, a selfie light and an HDR mode... all in an app that claims to be among the lightest and most frequently updated.

At Xataka Android | Which is the most complete photo editor for Android?

The article The best camera apps for photography professionals was originally published on Xataka Android by Enrique Pérez.


          Best Video Editing Software for Mac: Get Open Source Video Editing Software for Mac   

Gone are the days when video editing was a skill of a few professionals. Nowadays, video editing has become a powerful tool for post-production editors as well as for people who have the zeal to create something of their own. Making a short film or a documentary has become much easier with the numerous […]

The post Best Video Editing Software for Mac: Get Open Source Video Editing Software for Mac appeared first on INDABAA.


          (USA-CA-Palo Alto) Open Source Staff Engineer   
Calling experienced leaders in open source! Do you have experience contributing to, maintaining, and leading open source projects? Have you spent the last few years engaging and collaborating with others around the world in order to jointly create better solutions, better software that makes things easier, faster, or more productive for its users? Can you show us your contributions upstream, show where you've made a difference and demonstrated your skills? Can you point us to the projects you maintain on GitHub or other public repository sites? Then this job opportunity may be perfect for you.

We are the Open Source Technology Center (OSTC) in the Open Source Program Office of VMware (part of the Office of the CTO). We are building a strong team of open source developers who know how to engage and collaborate in open source projects. We are a team that aspires to make a difference through contributions to upstream projects, and tying those projects back to the core of our business. This is a small team that is growing fast and we are looking for people to join us in envisioning and building open source solutions – from infrastructure to automation and orchestration, from the kernel to tools to applications. Our focus is open source software development and its usage across the data center, whether it's on-prem or in the cloud. Together we will move the needle and change VMware's role in this space.

We are looking for senior engineers who have a strong work ethic, a make-it-happen mindset, and a track record of successful self-directed engagement with and leadership of upstream projects. In your role you will lead the process to identify opportunities to improve upstream open source projects, from bug fixes to performance and security enhancements and feature additions. Under your guidance, you and the other members of the OSTC team will work collaboratively with upstream on code, community, documentation, and anything and everything that is part of creating successful open source projects. The focus of this role will be around leading edge cloud projects including Kubernetes, Linkerd, and Istio, among others.

What are we looking for:

- a BS in CS or a related technical field
- at least ten years of software development experience
- a track record as active committer and (sub-)maintainer in one or more open source projects
- solid skills in relevant programming languages and environments: C, C++, Python, Go, Ruby... there are so many choices; you don't have to know them all, but you should be able to show your skills in a couple of them
- an open mind set, a collaborative problem solving attitude and the ability to engage in and lead diverse, global teams of developers

Why work for our Division: VMware's Office of the CTO (OCTO) is an organization with a broad mission to drive thought leadership through the company, encourage close relationships with our customers, the business and academia. Teams within the OCTO partner closely with technological leaders both internally and externally. The OCTO also drives innovation efforts focusing both on organic bottom-up innovation and external technological innovations. The OCTO is a results-driven organization that focuses on moving these innovations from the nascent idea stage into product, ultimately resulting in increased customer value from the family of VMware products.

Why work with our Group: The Open Source Technology Center is a small, fast growing team of (mostly) developers, focused on expanding VMware's footprint and influence in the broader open source community. We consider ourselves experts when it comes to open source, ambassadors of VMware in the open source communities, and mentors and coaches within VMware to spread the expertise around open source methodology and culture across the company.

Advertised Location: Existing VMware location in the US or home office.

VMware is an Equal Opportunity Employer and Prohibits Discrimination and Harassment of Any Kind: VMware is committed to the principle of equal employment opportunity for all employees and to providing employees with a work environment free of discrimination and harassment. All employment decisions at VMware are based on business needs, job requirements and individual qualifications, without regard to race, color, religion or belief, national, social or ethnic origin, sex (including pregnancy), age, physical, mental or sensory disability, HIV Status, sexual orientation, gender identity and/or expression, marital, civil union or domestic partnership status, past or present military service, family medical history or genetic information, family or parental status, or any other status protected by the laws or regulations in the locations where we operate. VMware will not tolerate discrimination or harassment based on any of these characteristics. VMware encourages applicants of all ages. VMware will provide reasonable accommodation to employees who have protected disabilities consistent with local law.
          OSGeo, OpenGIS, Open Geospatial: Boundless Reinforces its Commitment to Open Source with Diamond OSGeo Sponsorship - Marketwired (press release)   



          Zephyr QA Leader - Intel - Bangalore, Karnataka   
We have a long track record of contributing to and sponsoring a wide variety of open source projects, from the Linux kernel to the visualization stack to large...
From Intel - Fri, 02 Jun 2017 22:17:46 GMT - View all Bangalore, Karnataka jobs
          Zephyr Test Automation and Test Tool Engineer - Intel - Bangalore, Karnataka   
We have a long track record of contributing to and sponsoring a wide variety of open source projects, from the Linux kernel to the visualization stack to large...
From Intel - Wed, 10 May 2017 10:24:48 GMT - View all Bangalore, Karnataka jobs
          Software Engineer – DroneCode Lead - Intel - Bangalore, Karnataka   
We have a long track record of contributing to and sponsoring a wide variety of open source projects, from the Linux kernel to the visualization stack to large...
From Intel - Sat, 18 Mar 2017 10:18:40 GMT - View all Bangalore, Karnataka jobs
          Reasons to Choose AngularJs for your Next Project (Mobiloitte Technologies)   
Before proceeding, let's understand what AngularJS is and why it is the most preferred platform. It is a JavaScript framework designed to simplify front-end development. Developed by professional developers from Google, this open source web application framework revolves around HTML and is preferred for modern single-page application development to build...
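To make that concrete, here is a minimal sketch of the two-way binding AngularJS is known for (the module and controller names are illustrative):

  <div ng-app="demoApp" ng-controller="GreetCtrl">
    <input ng-model="name">
    <p>Hello {{ name }}!</p>
  </div>

  <script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.6.4/angular.min.js"></script>
  <script>
    // Typing in the input updates $scope.name, and the {{ name }}
    // expression re-renders automatically: two-way data binding.
    angular.module('demoApp', [])
      .controller('GreetCtrl', function ($scope) {
        $scope.name = 'world';
      });
  </script>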
          Platform9 Raises $22M to Make Open Source Cloud Infrastructure Tech Easier   

Platform9, a startup whose Software-as-a-Service platform takes much of the pain out of using open source cloud infrastructure technologies, has raised $22 million in Series C funding. It supports frameworks like Kubernetes, OpenStack, and Fission. The funding round was led by Canvas Ventures, along with existing investors Redpoint Ventures and Menlo Ventures, with Hewlett Packard Enterprise also

Platform9 Raises $22M to Make Open Source Cloud Infrastructure Tech Easier can be found on Infinite Group Inc.


          Miro 2.0 now available   
Version 2 of the Miro program has been available for a few days now; it is a sort of aggregator for video content of various types, and now audio as well. Remember that Miro is an open source program that...
          Choosing the Right Website Design for Your Jewelry Store (J Shah)   
Choosing the right company to design and develop a new jewelry store website matters. We are specialists in creating unique and custom web development and open source web design for your large or small jewelry business.
          #6166: Support DCP playback   

Working support for DCP playback has recently been committed to VLC: https://trac.videolan.org/vlc/ticket/16999

It would be nice if this were supported directly with ffmpeg so all downstream projects would benefit.

There is an open source library (I'm not sure about the license, but VLC ships it) for playing DCPs: http://www.cinecert.com/asdcplib/download/

Also, DCP relies heavily on JPEG 2000. Decoding of JPEG 2000 is rather performance-demanding. There is a project that uses hardware acceleration (with NVidia) that might actually enable realtime playback: http://apps.man.poznan.pl/trac/jpeg2k

FFmpeg can already play DCPs along the lines of: ffmpeg -lowres 1 -ss 0 -r 24 -i 094cedc9-abf9-4e60-b947-26fae2a8b781_picture.mxf -ss 0 -i c98425dc-4b65-42b1-80f4-d8909a65bbc3_sound.mxf -ac 2 -c:v mpeg2video -f avi - | ffplay - (the actual paths need of course to be changed), but it would be nice if one were able to do something like: ffplay DCP:///path/to/dcp/directory

Sample DCPs are here: http://www.dcpbuilder.com/download/cinema-packages.html


          #1964: Request support for decoding / demuxing Adobe HDS dynamic http streaming   

Adobe HDS is an adaptive streaming format used primarily to deliver video streams through content delivery networks.

It uses a manifest file (F4M) to describe the segments of a file, and then it adaptively delivers segments and sequences at the "optimal" bitrate depending on the client's bandwidth and the total server load.

Certain major content providers are moving to this format (at least in Sweden) and it would be really great if ffmpeg could support it.

A good summary of HDS is found at: http://rdkls.blogspot.se/2011/11/what-i-know-about-http-adaptive.html

It seems that most projects that do decode such streams use a PHP script (!) found at: https://github.com/K-S-V/Scripts/blob/master/AdobeHDS.php

The Open Source Media Framework (OSMF) project has ActionScript code which probably does the same thing: http://sourceforge.net/adobe/osmf/

The format is quite similar to applehttp/hls, so it should be possible to borrow some patterns from the support for that format, which is already in libavformat.

I am a C# / Java developer and could probably make this happen in C, but it would take a lot of effort. I am not yet familiar with the ffmpeg source code. So I'm hoping that someone might already be working on this? Or that someone well versed in avformat development can take it on. I'd be happy to contribute my own efforts.

Best regards!


          #3356: feature request: Segment HLS streams on SCTE 35 markers   

Many proprietary Apple HTTP Live Streaming or other HTTP streaming encoders accept SCTE-35 markers in the input MPEG stream. When processing such streams, these segmenters break the segments at the points described in the SCTE-35 message. In addition, a comment is inserted into the M3U8 manifest to indicate that the following chunk occurred after an SCTE-35 message.

This is now a very common practice, but no open source solution exists. The great benefit is that it allows a downstream piece of software to swap out chunks when such messages occur by simple text manipulation of the manifest file. The most common use case is the insertion of ads between 2 SCTE 35 messages in a live stream.
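
To make the text-manipulation idea concrete, here is a small illustrative Python sketch (not part of the ticket; the "# SCTE35" cue comment syntax is hypothetical, as real segmenters each emit their own) that swaps the chunks between a pair of cue comments in an M3U8 manifest for ad chunks:

    def insert_ad(manifest_lines, ad_lines, cue="# SCTE35"):
        # Copy the manifest, replacing everything between the first pair of
        # SCTE 35 cue comments with the ad chunks. Purely illustrative.
        out, in_break = [], False
        for line in manifest_lines:
            if line.startswith(cue):
                if not in_break:
                    out.extend(ad_lines)  # splice the ad in at the cue-out point
                in_break = not in_break
            elif not in_break:
                out.append(line)
        return out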

This is becoming a common feature in commercial encoders and it would be great to see it land in ffmpeg.


          #1778: EIA-608 / EIA-708 Closed Captions disappear when transcoding/reencoding   

Summary of the bug: when transcoding/re-encoding video, ffmpeg loses the CC data that was embedded within the actual video stream itself. This type of CC is referred to as EIA-608/EIA-708 and, from my research, is muxed into the video stream following the guidelines in SCTE 128. If you use '-c:v copy' the CC remains intact. I'm capturing live video from the GigE port of a Motorola DSR-6100 IRD that is putting out UDP multicast TS.
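
A minimal sketch of the contrast being reported, using Python only to drive the two ffmpeg invocations (the input path is a placeholder; the reporter actually captures from UDP multicast):

    import subprocess

    SRC = "disjrhd.ts"  # placeholder input file

    # Stream copy: the embedded EIA-608/708 caption data rides along untouched.
    subprocess.check_call(["ffmpeg", "-i", SRC, "-c:v", "copy", "-c:a", "copy",
                           "-f", "mpegts", "copy.ts"])

    # Re-encode: the decode/encode round trip drops the embedded caption data.
    subprocess.check_call(["ffmpeg", "-i", SRC, "-c:v", "mpeg2video", "-c:a", "mp3",
                           "-f", "mpegts", "reencoded.ts"])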

This PDF gives more details on how this method of CC works: http://www.evertz.com/resources/eia_608_708_cc.pdf

Here are the files produced by the command below. They are larger than the requested 10MB, so I've hosted them on my site; here are the direct links.

http://mikecheat.com/disjrhd.ts - Original (18.2MB)
http://mikecheat.com/disjrsd.ts - Reencoded (8.9MB)

There is another open source project that seems to have figured out how to pull this CC data from video; the source code is on their website: http://zapping.sourceforge.net/ZVBI/index.html

How to reproduce:

root@hdmux:/home/mike# ffmpeg -i 'udp://239.1.1.3:6100?fifo_size=9000000' -map 0:p:1:0 -c:v mpeg2video -s 704x480 -r ntsc -b:v 3000k -map 0:p:1:1 -c:a mp3 -ac 2 -ar 48000 -b:a 128k -f mpegts disjrsd.ts -map 0:p:1:0 -c:v copy -map 0:p:1:1 -c:a copy -f mpegts disjrhd.ts
ffmpeg version 1.0 Copyright (c) 2000-2012 the FFmpeg developers
  built on Sep 28 2012 14:24:44 with gcc 4.4.5 (Debian 4.4.5-8)
  configuration: --enable-gpl --enable-nonfree --enable-shared --enable-runtime-cpudetect --enable-libmp3lame --enable-libx264
  libavutil 51. 73.101 / 51. 73.101
  libavcodec 54. 59.100 / 54. 59.100
  libavformat 54. 29.104 / 54. 29.104
  libavdevice 54. 2.101 / 54. 2.101
  libavfilter 3. 17.100 / 3. 17.100
  libswscale 2. 1.101 / 2. 1.101
  libswresample 0. 15.100 / 0. 15.100
  libpostproc 52. 0.100 / 52. 0.100
[mpegts @ 0x2169240] Unable to seek back to the start
[h264 @ 0x2193a80] non-existing PPS referenced
[h264 @ 0x2193a80] non-existing PPS 0 referenced
[h264 @ 0x2193a80] decode_slice_header error
[h264 @ 0x2193a80] no frame!
[h264 @ 0x2193a80] non-existing PPS referenced
[h264 @ 0x2193a80] non-existing PPS 0 referenced
[h264 @ 0x2193a80] decode_slice_header error
[h264 @ 0x2193a80] no frame!
[h264 @ 0x2193a80] non-existing PPS referenced
[h264 @ 0x2193a80] non-existing PPS 0 referenced
[h264 @ 0x2193a80] decode_slice_header error
[h264 @ 0x2193a80] no frame!
[h264 @ 0x2193a80] non-existing PPS referenced
[h264 @ 0x2193a80] non-existing PPS 0 referenced
[h264 @ 0x2193a80] decode_slice_header error
[h264 @ 0x2193a80] no frame!
[h264 @ 0x2193a80] non-existing PPS referenced
[h264 @ 0x2193a80] non-existing PPS 0 referenced
[h264 @ 0x2193a80] decode_slice_header error
[h264 @ 0x2193a80] no frame!
[h264 @ 0x2193a80] non-existing PPS referenced
[h264 @ 0x2193a80] non-existing PPS 0 referenced
[h264 @ 0x2193a80] decode_slice_header error
[h264 @ 0x2193a80] no frame!
[h264 @ 0x2193a80] non-existing PPS referenced
[h264 @ 0x2193a80] non-existing PPS 0 referenced
[h264 @ 0x2193a80] decode_slice_header error
[h264 @ 0x2193a80] no frame!
[h264 @ 0x2193a80] non-existing PPS referenced
[h264 @ 0x2193a80] non-existing PPS 0 referenced
[h264 @ 0x2193a80] decode_slice_header error
[h264 @ 0x2193a80] no frame!
[h264 @ 0x2193a80] non-existing PPS referenced
[h264 @ 0x2193a80] non-existing PPS 0 referenced
[h264 @ 0x2193a80] decode_slice_header error
[h264 @ 0x2193a80] no frame!
[h264 @ 0x2193a80] non-existing PPS referenced
[h264 @ 0x2193a80] non-existing PPS 0 referenced
[h264 @ 0x2193a80] decode_slice_header error
[h264 @ 0x2193a80] no frame!
[h264 @ 0x2193a80] mmco: unref short failure
    Last message repeated 2 times
[mpegts @ 0x2169240] max_analyze_duration 5000000 reached at 5003333
[mpegts @ 0x2169240] Estimating duration from bitrate, this may be inaccurate
Input #0, mpegts, from 'udp://239.1.1.3:6100?fifo_size=9000000':
  Duration: N/A, start: 11940.555644, bitrate: 768 kb/s
  Program 1
    Stream #0:0[0x1e00]: Video: h264 (Main) ([27][0][0][0] / 0x001B), yuv420p, 1280x720 [SAR 1:1 DAR 16:9], 61.76 fps, 59.94 tbr, 90k tbn, 119.88 tbc
    Stream #0:1[0x1020](eng): Audio: ac3 (AC-3 / 0x332D4341), 48000 Hz, 5.1(side), s16, 384 kb/s
    Stream #0:2[0x1021](spa): Audio: ac3 (AC-3 / 0x332D4341), 48000 Hz, stereo, s16, 192 kb/s
    Stream #0:3[0x1022](eng): Audio: ac3 (AC-3 / 0x332D4341), 48000 Hz, stereo, s16, 192 kb/s
File 'disjrhd.ts' already exists. Overwrite ? [y/N] y
muxrate VBR, pcr every 2 pkts, sdt every 200, pat/pmt every 40 pkts
[mpegts @ 0x22aad60] muxrate VBR, pcr every 5 pkts, sdt every 200, pat/pmt every 40 pkts
Output #0, mpegts, to 'disjrsd.ts':
  Metadata:
    encoder : Lavf54.29.104
    Stream #0:0: Video: mpeg2video, yuv420p, 704x480 [SAR 40:33 DAR 16:9], q=2-31, 3000 kb/s, 90k tbn, 29.97 tbc
    Stream #0:1(eng): Audio: mp3, 48000 Hz, stereo, s16, 128 kb/s
Output #1, mpegts, to 'disjrhd.ts':
  Metadata:
    encoder : Lavf54.29.104
    Stream #1:0: Video: h264 ([27][0][0][0] / 0x001B), yuv420p, 1280x720 [SAR 1:1 DAR 16:9], q=2-31, 61.76 fps, 90k tbn, 59.94 tbc
    Stream #1:1(eng): Audio: ac3 (AC-3 / 0x332D4341), 48000 Hz, 5.1(side), 384 kb/s
Stream mapping:
  Stream #0:0 -> #0:0 (h264 -> mpeg2video)
  Stream #0:1 -> #0:1 (ac3 -> libmp3lame)
  Stream #0:0 -> #1:0 (copy)
  Stream #0:1 -> #1:1 (copy)
Press [q] to stop, ? for help
[h264 @ 0x28a5740] Missing reference picture
[h264 @ 0x28a5740] decode_slice_header error
[h264 @ 0x28a5740] concealing 3600 DC, 3600 AC, 3600 MV errors in B frame
[h264 @ 0x28a63a0] Missing reference picture
[h264 @ 0x28a63a0] decode_slice_header error
[h264 @ 0x28a63a0] concealing 3600 DC, 3600 AC, 3600 MV errors in B frame
[h264 @ 0x28a68e0] reference picture missing during reorder
[h264 @ 0x28a68e0] Missing reference picture
[h264 @ 0x28a68e0] decode_slice_header error
[h264 @ 0x28a68e0] concealing 3600 DC, 3600 AC, 3600 MV errors in P frame
[h264 @ 0x28a7360] mmco: unref short failure
[h264 @ 0x2c4ddc0] mmco: unref short failure
frame= 656 fps= 47 q=2.0 Lq=-1.0 size= 9102kB time=00:00:21.93 bitrate=3398.6kbits/s dup=22 drop=628
video:24157kB audio:1326kB subtitle:0 global headers:0kB muxing overhead -64.284273%
Received signal 2: terminating.

You can't use my command exactly as-is because I'm capturing from a live source, but you can use one of the above files to reproduce the problem. If any other info is needed, please let me know.

Thank you


          Tibco Architect - GC/ US Citizen Only - BlueFusion INC - Johns Creek, GA   
Tibco Solutions Architect. Experienced in rule engines with TIBCO BE or Open Source DROOLS (JBoss Rules). Johns Creek, GA....
From Indeed - Tue, 27 Jun 2017 19:49:57 GMT - View all Johns Creek, GA jobs
          Senior Developer -Java & TIBCO (Contract Project) - softvision (tams edition) - Johns Creek, GA   
Experienced in rule engines with TIBCO BE or Open Source DROOLS (JBoss Rules). We are looking for an exceptional Senior Developer (Java & Tibco) to work with...
From softvision (tams edition) - Tue, 27 Jun 2017 23:47:53 GMT - View all Johns Creek, GA jobs
          Storage, Search & User Page Updates   
Happy July 4th weekend, here are some updates!

1) We're running out of storage space and have elected to build a custom storage solution rather than expand on the NAS we currently use. @BrenTheMan did something similar before we moved to managed hosting - instead of buying a new NetApp, Bren built an open source solution and it was great. This new solution gives us a huge storage increase and is expected to outperform the NAS. The transition will be starting this week, with some downtime during the final push.

2) This week we will also be adding a server to run Elasticsearch. @liljim has been busy building an elastic index of all things NG and @PsychoGoldfish is wrapping up the front end. Don't expect to see it launch this week but it's coming soon and we'll be testing it first with Supporters while we tweak results and performance. It runs way faster than our current search but will also be searching way more content - no more sending people to Google results when they want to search beyond one portal!

3) New user pages are also looking sharp and getting closer to the big reveal. We've been getting feedback from supporters in the Supporter Forum and making additional improvements as we go. Super excited to see them go live.

This is a big month for NG upgrades! As a result of increased hosting expenses, our monthly support goal has been raised to $6k. That still doesn't include ongoing development, system administration or any other expenses associated with running NG. None of this is possible without the support of the community, so thank you everyone who helps keep this going! It only costs $3 per month to be a supporter, or $25 for the year. Monthly subscribers are great because they give us a consistent budget. You can also pay more than $3 per month if you like; your monthly commitment is adjustable and any amount is appreciated!

In other news, it's monthly voting time! Check your inbox to see if @P-Bot selected you for the monthly voting panel.

I was planning to kick off Summer Animation Jams by now but I've been distracted with preparing the Comic-Con demo of Nightmare Cops. Who out there wants to do Summer Animation Jams? Last summer we hosted the Loop Jam, Robot Jam, Sound Jam, Rejected Mascot Jam and Creep Jam. We also commissioned a bunch of site skins based on original characters from each jam, which was a lot of fun.

In Nightmare Cops news, @JazLyte and @RomeoJr have come aboard as voices. If you're at Comic-Con in San Diego this month, NC is part of the Behemoth & friends showcase in Booth 229! Nightmare Cops is actually in the FRIENDS section because it's being developed as a Newgrounds release. Exciting times!
          BEST PHP Training in Noida   
PHP (recursive acronym for PHP: Hypertext Preprocessor) is a widely-used open source general
          Jelly Bean’s Source Code Made Available!   
Nexus 7 and Android Jelly Bean, Google’s latest operating system for mobile devices, have been in the news for quite a while, and as Google promised, the source code has been released to the Android Open Source Project. That release makes Android what it truly is, i.e. open source. The statement came from Jean-Baptiste Queru.
          Freeciv 2.1.10   
Freeciv, a Civilisation clone, is an empire-building strategy game with an open source community of developers.







          Back-End Developer wanted. | Tasks: Perform requirements and busin...   
Back-End Developer wanted. | Tasks: Perform requirements and business analysis • Create technical specifications • Design software architecture • Software implementation and maintenance • Manage bug tracking, version control. | What we offer: Working mainly in Hungary, but possibility of working abroad on a regular basis • Big clients, mature technologies where you can learn a lot about the industry standards • Challenging, inspiring green-field projects, variety • Cutting-edge technology and methodology, agility in practice • Flexible working hours, no overtime, time for your private life • Great work environment • top-of-the-line PCs and office • near Metro 3, Duna Plaza, Tesco | Requirements: 3+ years of application development, including complex, high-volume applications • OO analysis and design • Familiarity with a variety of open source tools and libraries • Strong Java skills, expertise with Spring and Java EE APIs • Deeper knowledge of relational databases and SQL • Basic web front-end development skills: HTML, CSS, JS • Willingness and motivation to learn new frameworks & languages • Fluent English | Additional requirements: Intermediate/advanced web front-end development skills: HTML5, CSS preprocessors, JS frameworks | More info and application here: www.profession.hu/allas/1040821
          DevOps Engineer wanted. | Tasks: The role would include supporting softw...   
DevOps Engineer wanted. | Tasks: The role includes supporting software development, integration with various solutions, testing and debugging, and customer support. | What we offer: Flexible working hours, working from home. | Requirements: Operations experience in Linux/Unix environments, plus knowledge of enterprise Windows systems. • Knowledge of a scripting language (shell, perl, python, powershell, etc.). • Basic IT security knowledge. • Professional English. | Additional requirements: Knowledge of virtualization and container solutions (VMWare, KVM, Docker, LXC, etc.). • Experience with SIEM, logging and log management. • Use of version control systems (SVN, Git, etc.). • Development experience with package managers (building deb, rpm, msi, etc. installer packages). • Knowledge of CI tools (buildbot, jenkins, etc.). • Knowledge of configuration management tools (Chef, Ansible, Puppet, etc.). • Experience with Unix systems (AIX, Solaris, HP-UX, BSD). • Participation in open source projects or community work. • Good documentation skills (system and user documentation). • Independence, flexibility and good problem-solving skills. | More info and application here: www.profession.hu/allas/1041226
          BeyeNETWORK Announces Release of Research on Open Source Adoption   
A new research report by Mark Madsen on the evolution of open source technology adoption, and the factors influencing adoption in the business intelligence and data warehousing segment, has been released.


          Integrating Ingres in the Information System: An Open Source Approach   
This white paper describes a number of real-life interoperability scenarios for Ingres and explains how an open source approach helps solve the interoperability challenge.


          Cannibalizing the OSBI market   
There is no worse enemy than one from your own family: could the open source version of the products be cannibalizing the services on offer?


          Progress Software Introduces FUSE to Support Open Source Integration   
Progress Software launches FUSE Forge to simplify the development of open source integration projects.


          Samsung Galaxy S8 and Galaxy S8+. (#142)   
quote:
s3va:
ivdeml
Install Adhell (it's free, and apparently even open source, so if you want you can build it yourself and verify there are no backdoors); it has a Package Disabler section.

No hacker gizmos need to be installed on a 60,000-ruble phone. I already explained (although the moderators here delete, edit and clean everything up) that it's not my phone. I don't have that kind of money. It's an elderly woman's phone. She asked me a simple question, and I came here to you by mistake, thinking a simple question could be asked here. The topic title was misleading.
Well, never mind. It's all simple, really. The Americans explained everything step by step, without installing any unnecessary programs, hacker spyware, or other such things.
The real owners would know, if any of them were here.

https://www.youtube.com/watch?v=Eami7pVLZ2s

https://www.youtube.com/watch?v=_GfE2p7fR0g

What an amazing discovery: noticing the half-screen-sized "disable Bixby" button! Well done, what can I say...

That does not disable Bixby, since it still pops up if you accidentally press the Bixby button. Besides, all 6 Bixby services keep running on the device, possibly spying on you (yes, not only hackers can spy; device manufacturers can too) and consuming scarce CPU time and memory.

If those services are disabled at the system level, they cannot start at all. But in the system settings they cannot be disabled (the button simply can't be pressed), so you have to use third-party utilities.

And if you consider all third-party software to be hacker spyware, why use a smartphone at all?
          Cloning partitions and disks with the free application Clonezilla live 2.5.2-17   
Clonezilla is an open source application, more precisely an Open Source Clone System (OCS), aimed at saving and restoring the data contained in one or more partitions of a hard drive. Clonezilla's behaviour is therefore very similar to that of a well-known commercial software,...
          Untangle Open Source Network Gateway   
Another interesting Linux distribution oriented toward managing network services, complete with a configuration and system administration interface. Untangle offers a powerful suite of applications for managing Internet and intranet services for small and medium businesses. Untangle is a multi-function firewall system able to simplify and consolidate the network with numerous […]
          Fuchsia Rumors – Is Google Planning to Replace Android With a New Mobile OS?   

Recently, it has emerged that Google is working on its third OS after Android and Chrome OS, called Fuchsia. Google Fuchsia is a universal OS and runs on desktops, tablets, and smartphones. It is a real-time, open source operating system and is based on Google’s own kernel Magenta.

The new OS appeared briefly in August last year, before disappearing into oblivion. However, recent activity on Fuchsia has re-surfaced bringing along all sorts of speculation about Google’s intentions with the OS.

Read Fuchsia Rumors – Is Google Planning to Replace Android With a New Mobile OS? by Sarah Hanks on TechNorms.

      

          Business Game Changers Radio with Sarah Westall: Open Source Engineering Everything with Robert David Steele   
Episode: Nobel Peace Prize nominee Robert David Steele rejoins the program to discuss Open Source Everything Engineering. Rest of episode description coming soon...
          Business Game Changers Radio with Sarah Westall: Open Source Intelligence: Taking Back Government Secrecy   
Episode: Intelligence agencies and black projects have come under much more scrutiny as whistleblowers such as Snowden have come forward with evidence showing mass surveillance and secrecy that not only defies the intent of the constitution but also betrays the trust of the American people. Additionally, trillions of dollars have been spent on black projects conducting missions all over the world without Congress or the American people knowing what for. When we have had whistleblowers come for ...
          Senior Data Architect - Stem Inc - San Francisco Bay Area, CA   
Help design, develop and implement a resilient and performant distributed data processing platform using open source Big Data Technologies....
From Stem Inc - Tue, 27 Jun 2017 05:52:01 GMT - View all San Francisco Bay Area, CA jobs
          Any open source HUD or hand trackers? (Python)   
Hey, I'm currently learning python and am looking for a new programming challenge. Since I enjoy playing poker, I figured it'd be a fun project to build a poker calculator, something akin to Tournament indicator. Now, the main problem I've got is that I've got absolutely no clue on how I'd...
          Help Wanted   

Developers of all sorts are welcome to join the pennylender team! Please review the document entitled "PennyLender Open Source Framework" for
an overview of the system.


          An open source solution for dockerizing STAF and LTP on IBM Power Systems https://t.co/jgGJmJBfHa https://t.co/xr3BFsGD8m   
          jSyncManager Blog now Online.   

Brad BARCLAY, Lead Developer and Project Administrator of the jSyncManager Project is proud to announce the launch of his new development blog, entitled "The jSyncMan". This blog is designed for informal commentary on the jSyncManager and other Open Source development projects, along with helpful development articles, and is viewable via http://blog.jsyncmanager.org.

An article on jConduit development for beginners is now online to launch this new blog, with more such tutorials coming up in the future on topics such as using the block and record handler objects for the PalmOS standard applications, writing AbstractInstaller jConduits, and using the jSyncManager synchronization engine in your own applications.

Comments on this new information facility can be sent to bbarclay@jsyncmanager.org.


          Comment on Online Collaboration by ArcherTC   
I have used DimDim on several occasions and have found it to be a no-brainer alternative to WebEx and GoToMeeting, both of which I have used in professional settings. Notably, the company also offers a self-hosted Open Source solution that can be branded with your own logo.
          .NET Framework 4.6, .NET Core 5, ASP.NET 4.6 e ASP.NET 5: un po' di chiarezza   

Ieri Microsoft ha annunciato, durante l'evento #VSConnect, diverse cose interessanti riguardo ASP.NET, il .NET Framework, Visual Studio 2015 e tutta una serie di altre tecnologie. Un recap lo trovare qui.

Oggi mi soffermerò specificatamente sull'impatto che hanno questi annunci e novità sul versante web. .NET 2015 (si passa ad un nome che non include il numero di versione ed è allineato con quello di Visual Studio, per fare meno confusione) in realtà include due versioni del .NET Framework:

  • .NET Framework 4.6: quello che conosciamo già e che gira su Windows;
  • .NET Core 5: una nuova versione capace di funzionare su Windows, Linux e MacOSX, con BCL, runtime e Gargabe Collector open source.

La prima variante include il supporto per ASP.NET 5 (che è poi il nuovo modello non compatibile al 100% con l'attuale, conosciuto con il nome di ASP.NET vNext e compatibile solo con ASP.NET MVC e WebAPI), ASP.NET 4.6 (una release compatibile con il modello attuale, che include anche il supporto a Web Forms), WPF (che riceverà nuove feature) e WinForms (si, quella cosa vecchia già nel 2005... :)).

.NET Core è invece la vera novità e include il supporto solo per ASP.NET 5 (Windows, Linux e MacOSX) e .NET Native (una versione di .NET "compilata", per Windows 10 e le sue varianti desktop, mobile e embedded).

In tutto questo, le due versioni hanno in comune runtime (RyuJIT, il nuovo JITter), i nuovi compilatori (Roslyn) e le nuove librerie. Quindi, anche usando .NET Framework 4.6 (con ASP.NET 4.6), perché volete Web Forms e non volete andare su altre piattaforme che non siano Windows, riceverete gli stessi benefici e non sarete lasciati indietro. Tra l'altro ci saranno alcune novità (piccole, ma interessanti) anche per Web Forms.

Infine, la versione si sceglie comodamente da Visual Studio 2015 e avrete diverse novità nei tool di sviluppo e nel supporto per la parte web, sia che scegliate ASP.NET 5, sia che optiate per la 4.6.


Continue reading .NET Framework 4.6, .NET Core 5, ASP.NET 4.6 and ASP.NET 5: a bit of clarity.




          FluidLite, an open source SoundFont software synthesizer   
For my next piece of software I needed a software synth able to reproduce a fairly wide range of instruments realistically, while remaining as lightweight and portable as possible (I'm targeting Windows, Mac, Android and iOS, among others). My choice fell on the SoundFont format, which can store a very large number [...]
          Putting the “BI” in Big Data   

Originally posted on: http://geekswithblogs.net/andrewbrust/archive/2011/10/16/putting-the-ldquobirdquo-in-big-data.aspx

Last week, at the PASS (Professional Association for SQL Server) Summit in Seattle, Microsoft held a coming out party, not only for SQL Server 2012 (formerly “Denali”), but also for the company’s “Big Data” initiative.  Microsoft’s banner headline announcement: it is developing a version of Apache Hadoop that will run on Windows Server and Windows Azure.  Hadoop is the open source implementation of Google’s proprietary MapReduce parallel computation engine and environment, and it's used (quite widely now) in the processing of streams of data that go well beyond even the largest enterprise data sets in size.  Whether it’s sensor, clickstream, social media, location-based or other data that is generated and collected in large gobs, Hadoop is often on the scene in the service of processing and analyzing it.

Microsoft’s Hadoop release will be a bona fide contribution to the venerable open source project. It will be built in conjunction with Hortonworks, a company with an appropriately elephant-themed name (“Hadoop” was the name of the toy elephant of its inventor’s son) and strong Yahoo-Hadoop pedigree.  Even before PASS, Microsoft had announced Hadoop connectors for its SQL Server Parallel Data Warehouse Edition (SQL PDW) appliance.  But last week Microsoft announced things that would make Hadoop its own – in more ways than one.

Yes, Hadoop will run natively on Windows and integrate with PDW.  But Microsoft will also make available an ODBC driver for Hive, the data warehousing front-end for Hadoop developed by Facebook. What’s the big deal about an ODBC driver?  The combination of that driver and Hive will allow PowerPivot and SQL Server Analysis Services (in its new “Tabular mode”) to connect to Hadoop and query it freely.  And that, in turn, will allow any Analysis Services front end, including PowerView (until last week known by its “Crescent” code name), to perform enterprise-quality analysis and data visualization on Hadoop data.  Not only is that useful, it’s even a bit radical.
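
As a purely illustrative sketch of what that driver enables (assuming an already-configured Hive ODBC data source; the "Hive" DSN name and the clickstream table are hypothetical), querying Hadoop from Python then looks like any other ODBC query:

    import pyodbc  # generic ODBC bridge; the Hive ODBC driver is assumed installed

    # Connect through the hypothetical "Hive" DSN exposed by the ODBC driver.
    conn = pyodbc.connect("DSN=Hive", autocommit=True)
    cursor = conn.cursor()

    # HiveQL looks like SQL; Hive turns it into MapReduce jobs over Hadoop data.
    cursor.execute("SELECT page, COUNT(*) AS hits FROM clickstream GROUP BY page")
    for page, hits in cursor.fetchall():
        print(page, hits)

The same kind of connection is what lets tools like PowerPivot treat Hadoop as just another ODBC data source.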

As powerful as Hadoop is, it’s more of a computer scientist’s or academically-trained analyst’s tool than it is an enterprise analytics product.  Hadoop tends to deal in data that is less formally schematized than an enterprise’s transactional data, and Hadoop itself is controlled through programming code rather than anything that looks like it was designed for business unit personnel.  Hadoop data is often more “raw” and “wild” than data typically fed to data warehouse and OLAP (Online Analytical Processing) systems.  Likewise, Hadoop practitioners have had to be a bit wild too, producing analytical output perhaps a bit more raw than what business users are accustomed to.

But assuming Microsoft makes good on its announcements (and I have pretty specific knowledge that indicates it will), then business users will be able to get at big data, on-premise and in-cloud, and will be able to do so using Excel, PowerPivot, and other tools that they already know, like and with which they are productive.

Microsoft’s Big Data announcements show that Redmond’s BI (Business Intelligence) team keeps on moving.  They’re building great products, and they’re doing so in a way that makes powerful technology accessible by a wide commercial audience.  For the last seven years, SQL Server’s biggest innovations have been on the BI side of the product.  This shows no sign of stopping any time soon, especially since Microsoft saw fit to promote Amir Netz, the engineering brain trust behind Microsoft BI since its inception, to Technical Fellow.  This distinction is well-deserved by Mr. Netz and its bestowal is a move well-played by Microsoft.

Last week’s announcements aren’t about just Big Data; they’re about Big BI, now open for Big Business.


          Principal SDE Lead - Microsoft - Redmond, WA   
Experience with open source platforms like node.js, Ruby on Rails, the JVM ecosystem, the Hadoop ecosystem, data platforms like Postgres, MongoDB and Cassandra...
From Microsoft - Thu, 29 Jun 2017 10:48:18 GMT - View all Redmond, WA jobs
          Microsoft Cloud Day - the ups and downs   

Originally posted on: http://brustblog.com/archive/2012/06/23/microsoft-cloud-day---the-ups-and-downs.aspx

The term ‘cloud’ can sometimes obscure the obvious.  Today’s Microsoft Cloud Day conference in London provided a good example.  Scott Guthrie was halfway through what was an excellent keynote when he lost network connectivity.  This proved very disruptive to his presentation which centred on a series of demonstrations of the Azure platform in action.  Great efforts were made to find a solution, but no quick fix presented itself.  The venue’s IT facilities were dreadful – no WiFi, poor 3G reception (forget 4G…this is the UK) and, unbelievably, no-one on hand from the venue staff to help with infrastructure issues.  Eventually, after an unscheduled break, a solution was found and Scott managed to complete his demonstrations.  Further connectivity issues occurred during the day.

I can say that the cause was prosaic.  A member of the venue staff had interfered with a patch board and inadvertently disconnected Scott Guthrie’s machine from the network by pulling out a cable.

I need to state the obvious here.  If your PC is disconnected from the network it can’t communicate with other systems.  This could include a machine under someone’s desk, a mail server located down the hall, a server in the local data centre, an Internet search engine or even, heaven forbid, a role running on Azure.

Inadvertently disconnecting a PC from the network does not imply a fundamental problem with the cloud or any specific cloud platform.  Some of the tweeted comments I’ve seen today are analogous to suggesting that, if you accidentally unplug your microwave from the mains, this suggests some fundamental flaw with the electricity supply to your house.   This is poor reasoning, to say the least.

As far as the conference was concerned, the connectivity issue in the keynote, coupled with some later problems in a couple of presentations, served to exaggerate the perception of poor organisation.   Software problems encountered before the conference prevented the correct set-up of a smartphone app intended to convey agenda information to attendees.  Although some information was available via this app, the organisers decided to print out an agenda at the last moment.  Unfortunately, the agenda sheet did not convey enough information, and attendees were forced to approach conference staff through the day to clarify locations of the various presentations.

Despite these problems, the overwhelming feedback from conference attendees was very positive.  There was a real sense of excitement in the morning keynote.  For many, this was their first sight of new Azure features delivered in the ‘spring’ release.  The most common reaction I heard was amazement and appreciation that Azure’s new IaaS features deliver built-in template support for several flavours of Linux from day one.  This coupled with open source SDKs and several presentations on Azure’s support for Java, node.js, PHP, MongoDB and Hadoop served to communicate that the Azure platform is maturing quickly.  The new virtual network capabilities also surprised many attendees, and the much improved portal experience went down very well.

So, despite some very irritating and disruptive problems, the event served its purpose well, communicating the breadth and depth of the newly upgraded Azure platform.  I enjoyed the day very much.

 


          Microsoft and the open source community   

Originally posted on: http://brustblog.com/archive/2012/03/28/microsoft-and-the-open-source-community.aspx

For the last decade, I have repeatedly, in my inimitable Microsoft fan boy style, offered an alternative view to commonly held beliefs about Microsoft's stance on open source licensing.  In earlier times, leading figures in Microsoft were very vocal in resisting the idea that commercial licensing is outmoded or morally reprehensible.  Many people interpreted this as all-out corporate opposition to open source licensing.  I never read it that way. It is true that I've met individual employees of Microsoft who are antagonistic towards FOSS (free and open source software), but I've met more who are supportive or at least neutral on the subject.  In any case, individual attitudes of employees don't necessarily reflect a corporate stance.  The strongest opposition I've encountered has actually come from outside the company.  It's not a charitable thought, but I sometimes wonder if there are people in the .NET community who are opposed to FOSS simply because they believe, erroneously, that Microsoft is opposed.

Here, for what it is worth, are the points I've repeated endlessly over the years and which have often been received with quizzical scepticism.

a)  A decade ago, Microsoft's big problem was not FOSS per se, or even with copyleft.  The thing which really kept them awake at night was the fear that one day, someone might find, deep in the heart of the Windows code base, some code that should not be there and which was published under GPL.  The likelihood of this ever happening has long since faded away, but there was a time when MS was running scared.  I suspect this is why they held out for a while from making Windows source code open to inspection.  Nowadays, as an MVP, I am positively encouraged to ask to see Windows source.

b)  Microsoft has never opposed the open source community.  They have had problems with specific people and organisations in the FOSS community.  Back in the 1990s, Richard Stallman gave time and energy to a successful campaign to launch antitrust proceedings against Microsoft.  In more recent times, the negative attitude of certain people to Microsoft's submission of two FOSS licences to the OSI (both of which have long since been accepted), and the mad scramble to try to find any argument, however tenuous, to block their submission was not, let us say, edifying.

c) Microsoft has never, to my knowledge, written off the FOSS model.  They certainly don't agree that more traditional forms of licensing are inappropriate or immoral, and they've always been prepared to say so. 

One reason why it was so hard to convince people that Microsoft is not rabidly antagonistic towards FOSS licensing is that so many people think they have no involvement in open source.  A decade ago, there was virtually no evidence of any such involvement.  However, that was a long time ago.  Quietly over the years, Microsoft has got on with the job of working out how to make use of FOSS licensing and how to support the FOSS community.  For example, as well as making increasingly extensive use of Github, they run an important FOSS forge (CodePlex) on which they, themselves, host many hundreds of distinct projects.  The total count may even be in the thousands now.  I suspect there is a limit of about 500 records on CodePlex searches because, for the past few years, whenever I search for Microsoft-specific projects on CodePlex, I always get approx. 500 hits.  Admittedly, a large volume of the stuff they publish under FOSS licences amounts to code samples, but many of those 'samples' have grown into useful and fully featured frameworks, libraries and tools.

All this is leading up to the observation that yesterday's announcement by Scott Guthrie marks a significant milestone and should not go unnoticed.  If you missed it, let me summarise.   From the first release of .NET, Microsoft has offered a web development framework called ASP.NET.  The core libraries are included in the .NET framework which is released free of charge, but which is not open source.   However, in recent years, the number of libraries that constitute ASP.NET has grown considerably.  Today, most professional ASP.NET web development exploits the ASP.NET MVC framework.  This, together with several other important parts of the ASP.NET technology stack, is released on CodePlex under the Apache 2.0 licence.   Hence, today, a huge swathe of web development on the .NET/Azure platform relies four-square on the use of FOSS frameworks and libraries.

Yesterday, Scott Guthrie announced the next stage of ASP.NET's journey towards FOSS nirvana.  This involves extending ASP.NET's FOSS stack to include Web API and the MVC Razor view engine which is rapidly becoming the de facto 'standard' for building web pages in ASP.NET.  However, perhaps the more important announcement is that the ASP.NET team will now accept and review contributions from the community.  Scott points out that this model is already in place elsewhere in Microsoft, and specifically draws attention to development of the Windows Azure SDKs.  These SDKs are central to Azure development.   The .NET and Java SDKs are published under Apache 2.0 on Github and Microsoft is open to community contributions.  Accepting contributions is a more profound move than simply releasing code under FOSS licensing.  It means that Microsoft is wholeheartedly moving towards a full-blooded open source approach for future evolution of some of their central and most widely used .NET and Azure frameworks and libraries.  In conjunction with Scott's announcement, Microsoft has also released Git support for CodePlex (at long last!) and, perhaps more importantly, announced significant new investment in their own FOSS forge.

Here at Solidsoft we have several reasons to be very interested in Scott's announcement. I'll draw attention to one of them.  Earlier this year we wrote the initial version of a new UK Government web application called CloudStore.  CloudStore provides a way for local and central government to discover and purchase applications and services. We wrote the web site using ASP.NET MVC which is FOSS.  However, this point has been lost on the ladies and gentlemen of the press and, I suspect, on some of the decision makers on the government side.  They announced a few weeks ago that future versions of CloudStore will move to a FOSS framework, clearly oblivious of the fact that it is already built on a FOSS framework.  We are, it is fair to say, mildly irked by the uninformed and badly out-of-date assumption that “if it is Microsoft, it can't be FOSS”.  Old prejudices live on.


          Senior Open Source Designer   
Request 20120969, temporary contract. For the Ministry of Defence we are looking for a Senior Open Source Designer. Activities: A. Works out the complex architecture and frameworks by: - translating the functional requirements into a functional design; - translating the functional design into the technical design; - drawing up (configuration) documentation; - translating the implementation provisions into measures to be taken; - drawing up quality certifications...
          Senior DevOps Engineer - (Watertown)   
ID 2017-2041
Job Location(s): US-MA-Watertown
Position Type: Permanent - Full Time
More information about this job:
Overview: This role is based within our Global Technical Operations team. Mimecast Engineers are technical experts who love being in the centre of all the action and play a critical role in making sure our technology stack is fit for purpose, performing optimally with zero down time. In this high priority role you will tackle a range of complex software and system issues, including monitoring of large farms of servers in multiple geographic locations, responding to and safeguarding the availability and reliability of our most popular services.
Responsibilities: Contribution and active involvement with every aspect of the production environment, to include:
- Dealing with design issues
- Running large server farms in multiple geographic locations around the world
- Performance analysis
- Capacity planning
- Assessing applications behavior
- Linux engineering and systems administration
- Crafting SQL queries
- Architecting and writing moderately-sized tools
- Tweaking router and switch configurations
You will focus on solving difficult problems with scalable, elegant and maintainable solutions.
Qualifications:
- In depth expertise in Linux internals and system administration, including configuration and troubleshooting.
- Hands on experience with performance tuning of Linux OS (CentOS), identifying bottlenecks such as disk I/O, memory, CPU and network issues
- Solid scripting skills in Shell/Ruby/Perl/Python
- Strong understanding of IP networking, including familiarity with concepts such as the OSI stack
- Ability to analyze network behavior, performance and application issues using standard tools
- Hands on experience in automated provisioning for server farms (using tools such as Kickstart, Cobbler etc.)
- Hands on experience in configuration management of server farms (using tools such as mcollective, Puppet, Cfengine, Chef etc.)
- Hands on experience with open source monitoring and graphing solutions such as Nagios, Zabbix, Zenoss and Munin
- Strong understanding of common Internet protocols and applications such as SMTP, DNS, HTTP, SSH, SNMP etc.
- Experience running farms of servers (at least 200+ physical servers) and associated networking infrastructure in a production environment
- Hands on experience working with server hardware such as HP Proliant, Dell PowerEdge or equivalent
- Comfortable working on-call rotas and out of hours as and when required to ensure uptime of services
Requirements / Desirable skills:
- Experience with switches, routers, firewalls and load balancers
- Working with PostgreSQL database
- Administering Java based applications
- Working knowledge of routing protocols such as BGP
- Experience running high end Fortigate firewalls
Reward: We offer a highly competitive rewards and benefits package including dental, healthcare, vision, and life insurance options, Flexible Spending Accounts, and a 401(k) plan. Mimecast is an entrepreneurial and high growth company which will provide the right candidate with a wealth of career development opportunities.
          Java Architect - (Boston)   
Architect for building a Securities Finance trading platform:
- Provide enterprise architecture solutions and the roadmap to build complex applications
- Provide technical expertise in analyzing, designing, estimating, developing, and testing/debugging software applications to the project schedule
- Design functional/system integration tests and automation
- Provide architectural guidance to the development team
- Provide subject matter expertise in reviewing, analyzing, and resolving complex issues
- Follow Agile SCRUM methodology for development of business functionality
Technical Skills:
- Extensive programming skills in J2EE, Core Java, Tomcat, JBoss, Oracle, SQL, PLSQL, Spring, Hibernate, Web Services, Angular JS, JSON, jQuery, Bootstrap, CSS, JScript, web & network architectures (HTTP services, load balancers, proxy services, etc.), Maven, ClearCase, SVN, GIT, JUnit, JTest, Jenkins, Cucumber, Hudson, Clover, MQ, PHP, Perl, scripting languages and other open source technologies, etc.
- Strong presentation/whiteboarding skills; lead architecture discussions in front of business leaders and technically strong colleagues
- Strong knowledge of test-driven development and continuous integration
- End to end understanding of software architecture, design, development and implementation
- Extensive hands-on experience with web services and knowledge of SOA, web services standards, and other approaches to service-oriented integration
General Profile:
- Bachelor's degree in computer science or equivalent technical experience
- 10+ years of enterprise-level architecture experience in major financial service firms
- Shares own expertise with others; will co-ordinate activities of others/the team
- Strong written & verbal communication skills; experience writing and reviewing technology documents including user guides
          Site Reliability Engineer - (Watertown)   
This role is based within our Global Technical Operations team. Mimecast Engineers are technical experts who love being in the centre of all the action and play a critical role in making sure our technology stack is fit for purpose, performing optimally with zero down time. In this high priority role you will tackle a range of complex software and system issues, including monitoring of large farms of servers in multiple geographic locations, responding to and safeguarding the availability and reliability of our most popular services.
Responsibilities: Contribution and active involvement with every aspect of the production environment, to include:
- Dealing with design issues
- Running large server farms in multiple geographic locations around the world
- Performance analysis
- Capacity planning
- Assessing applications behavior
- Linux engineering and systems administration
- Crafting SQL queries
- Architecting and writing moderately-sized tools
- Tweaking router and switch configurations
You will focus on solving difficult problems with scalable, elegant and maintainable solutions.
Qualifications:
- In depth expertise in Linux internals and system administration, including configuration and troubleshooting.
- Hands on experience with performance tuning of Linux OS (CentOS), identifying bottlenecks such as disk I/O, memory, CPU and network issues
- Solid scripting skills in Shell/Ruby/Perl/Python
- Strong understanding of IP networking, including familiarity with concepts such as the OSI stack
- Ability to analyze network behavior, performance and application issues using standard tools
- Hands on experience in automated provisioning for server farms (using tools such as Kickstart, Cobbler etc.)
- Hands on experience in configuration management of server farms (using tools such as mcollective, Puppet, Cfengine, Chef etc.)
- Hands on experience with open source monitoring and graphing solutions such as Nagios, Zabbix, Zenoss and Munin
- Strong understanding of common Internet protocols and applications such as SMTP, DNS, HTTP, SSH, SNMP etc.
- Experience running farms of servers (at least 200+ physical servers) and associated networking infrastructure in a production environment
- Hands on experience working with server hardware such as HP Proliant, Dell PowerEdge or equivalent
- Comfortable working on-call rotas and out of hours as and when required to ensure uptime of services
Requirements / Desirable skills:
- Experience with switches, routers, firewalls and load balancers
- Working with PostgreSQL database
- Administering Java based applications
- Working knowledge of routing protocols such as BGP
- Experience running high end Fortigate firewalls
Reward: We offer a highly competitive rewards and benefits package including dental, healthcare, vision, and life insurance options, Flexible Spending Accounts, and a 401(k) plan. Mimecast is an entrepreneurial and high growth company which will provide the right candidate with a wealth of career development opportunities.
          Front End Developer - (Boston)   
Overall Summary: As a member of the Web Development team, this candidate will have a strong background in cross-browser CSS 3 and HTML 5, plus experience with JavaScript and open source frameworks like jQuery and Bootstrap. The candidate will have designed and built dynamic, high quality, interactive interfaces for web applications that have broad consumer appeal. Extensive experience in developing front ends using HTML, CSS and JavaScript is a requirement. Exposure to mobile application development is preferred.
Major Responsibilities:
- Design and develop the presentation layer for web based applications as functional prototypes with cross-browser compatibility
- Work with different business partners to build the optimal user experience
- Participate as an active member of a small, experienced, energetic team on a rapid, agile development schedule
- Convert designs into complex user interfaces using interface technologies such as (X)HTML, CSS and JavaScript
- Build and manage prototypes to support reviews, usability evaluation and visioning
- Bridge design and development concepts between marketing design and web development
- Have familiarity with change control processes and full acceptance of those guidelines
- Provide graphical elements and page level designs, as necessary
- Prior experience working with design agencies to collaborate on UX is a plus
Experience/Skills Required: Bachelor's degree, preferably in Computer Science or MIS
Technical/Analytical Skills Required:
- 3+ years of web site/web application design experience required
- Familiar with collaboration tools like SharePoint, Office 365
- Social media experience/integration (Twitter, Facebook, LinkedIn, RSS feeds)
- Working experience with responsive and adaptive websites
- A strong proponent of clean, valid, maintainable, and semantically correct HTML/CSS, including HTML5/CSS3
- In-depth knowledge of the capabilities of IE, Firefox, Safari, and Chrome
- Experience using tools like Firebug, YSlow etc. for debugging purposes
- Experience with Adobe Photoshop and Adobe Illustrator
- Experience with converting designs into HTML and CSS
- Working experience with PHP or ASP is required
- Experience with cross-platform, cross-browser strategies
- Strong business, technical, problem solving and analytical skills
- Experience working with a CMS tool is a plus but not a requirement
- Experience with JavaScript libraries such as jQuery, Bootstrap etc.
- Experience using a source control system such as Subversion or GitHub
          Junior Web Applications Developer - (Cambridge)   
The Berkman Klein Center seeks an enthusiastic web applications developer with strong people skills. The successful candidate will develop web-based full stack applications, from the database to the UI. The developer will need to keep current with the tools and technologies of open source software development and database architecture, and will release much of their work under FOSS licenses. Working with great in-house talent, the position is ideal for someone who enjoys learning and tinkering with, and solving, a range of dynamic technology problems, and who is comfortable in a fun, dynamic and fast-moving environment. The candidate should have good communication skills; be adept at working with both non-technical and technical people; be able to take initiative and follow through with minimal supervision; be patient, curious, and perseverant; be flexible and able to prioritize and manage multiple needs; and work well both as part of a team and independently.
          Contract Python Developer - (Boston)   
The contractor's role will be to implement these pipelines in a secure, HIPAA compliant, production environment. Adhering to open source code standards, the developer will be responsible for:
- working with the team to create a code structure that can grow
- developing a code base that is scalable, maintainable and most of all understandable
- writing unit tests for key code features
- optimizing resource-heavy code in terms of memory and speed of execution
- creating secure API endpoints
The ideal developer:
- has a track record of writing clear and simple code
- has shipped production-ready code on different backend projects in Python 3
- is excited by data science
- has written and shipped APIs
- is experienced in object oriented coding
Our technical stack is:
- Python 3.6+, Anaconda distribution
- NLP libraries such as NLTK or spaCy
- scikit-learn and other machine learning libraries
- PostgreSQL
- Git, Docker, Procfile
Must have active work authorization without need for sponsorship.
          AudioKit – an open source API using Csound   
AudioKit is an open source API that provides an easy and intuitive way to write audio applications entirely in Objective-C or Swift, using Csound as the...



          Full Stack Rails Developer - Domain7 - British Columbia   
We lean heavily toward open source tech, and our framework choice of the past couple years has been Ruby on Rails, but we've also worked with the LAMP stack....
From Domain7 - Mon, 17 Apr 2017 17:09:19 GMT - View all British Columbia jobs
          Comment on [KR1091] Keiser Report: ‘Oligarchic America’ by Vonda Bra   
!!! Morris - 1.7.17 RE-UPLOAD Petya Not Really Ransomware - Open Source Programmer ".. Redhuan D. Oon is a 35 year veteran Open Source guru at www.red1.org .." https://www.youtube.com/watch?v=zc8FuuzGGrc (12) !!!
          Open Source Community Manager Intern (M/F)   
Company: Euro Information is the IT subsidiary of the Crédit Mutuel bancassurance group …
          Setting up a simple Mogre Application in F#   
The game engine landscape is pretty crowded. There are open source ones, commercial ones and so on. Ogre is properly described as an "Open Source 3D Graphics Engine", which means that it sits at a lower level than a fully-fledged game engine. You can use Ogre for 3D rendering and assemble your homemade game engine picking … Continue reading
          Master en Software Libre de Gestión: Open Source & ERP II   
The Master en Software Libre de Gestión: Open Source & ERP II lets you successfully face the current challenges that business management demands, offering you an integral vision in terms of business, efficiency, optimization and rationalization of investments, through Enterprise Applications. The adoption of...
          pkgsrcCon 2016 report   
pkgsrcCon is the annual technical conference for people working on pkgsrc, a framework for building over 17,000 open source software packages. pkgsrc is the native package manager on NetBSD, SmartOS and Minix, and is portable across many different operating systems including Linux and Mac OS X.

Last year's pkgsrcCon 2016 event took place in Kraków, Poland.

Slides are available on the event site.

Video recordings are stored at https://archive.org/details/pkgsrcCon-2016.

We would like to thank the organizers and sponsors, and acknowledge the promotion from the Jagiellonian University, The NetBSD Foundation, Programista Magazyn, OS World, and the Subcarpathian BSD User Group.

          Senior Linux Storage Software Engineer - RSD - Intel - Hillsboro, OR   
Able to work directly with external companies, open source communities and across business units within Intel....
From Intel - Sat, 24 Jun 2017 10:26:17 GMT - View all Hillsboro, OR jobs
          Data Analyst/Munger   
VIC-Melbourne | 4-8 Week Contract | CBD Location | High Priority Project

Data Analyst/Munger

Seeking a dynamic and agile data analyst for an exciting and highly regarded project currently being undertaken by our government client. As a data analyst you will be focused on the "data munging" component of the project: de-siloing, deconstructing and transforming open source data. The initial duration of the contract will
          Comment on Viber halts development of its Windows 10 PC and Mobile app by Grant Gailey   
While all of the foregoing article is accurate and well written, perhaps it is also important to consider that Microsoft is staffed and managed by the best and brightest development teams in every relevant field. Its long-term product development appears unmatched. In fact, when Microsoft marketing decides to unleash the next generation of mobile devices that can also run open source Android applications on quality mobile devices, it may well be game, set and match for cost-cutting junk ad peddlers like some you find today at the Google store. Remember, it did not take Microsoft a decade to start showing a profit. They always hit the ground at a dead run.
          Tibco Architect - GC/ US Citizen Only - BlueFusion INC - Johns Creek, GA   
Tibco Solutions Architect*. Experienced in rule engines with TIBCO BE or Open Source DROOLS (JBoss Rules). Johns Creek, GA*....
From Indeed - Tue, 27 Jun 2017 19:49:57 GMT - View all Johns Creek, GA jobs
          Senior Developer -Java & TIBCO (Contract Project) - softvision (tams edition) - Johns Creek, GA   
Experienced in rule engines with TIBCO BE or Open Source DROOLS (JBoss Rules). We are looking for an exceptional Senior Developer (Java &amp; Tibco) to work with...
From softvision (tams edition) - Tue, 27 Jun 2017 23:47:53 GMT - View all Johns Creek, GA jobs
          Mac hacks for research   
As much as I sometimes want to think that Apple is the new Microsoft, I can't deny that they've got something that the evil empire never had - fanatical users who are loyal not because they have to be, but because they truly love Macs. In fact, they love Macs so much that they often devote their free time to developing stunning software applications that range from the quirky and fun (think Delicious Library) to the "how did I ever live without it?" (think Papers). The enormous array of applications available for Macs, unrivaled in their aesthetics, ease of use, and depth of features, serves to reinforce the Mac's reputation as the platform of choice for trendsetting computer users.

It turns out that this is true in the scientific domain as well. Joel Dudley, founder of MacResearch, gave a guest talk for my lab today on a dizzying array of Mac tips, tricks, and software meant to optimize the Mac experience, especially in a scientific research environment. Some of the applications he mentioned looked truly extraordinary, and I thought I'd describe some of the more notable ones here for those interested in getting more out of their Macs.

Macnification
For the cell and molecular biologists out there, here's a solution for your image processing needs. Macnification is like an extended iPhoto for microscopy. The full feature set looks impressive - you can track experiments, manage metadata, make measurements, create movies, and generate virtual z-slices through multiple images, all in one sleek application. I don't work with microscopy images, but now I wish I did!

NodeBox
For Python programmers wanting to flex their artistic side, NodeBox allows you to create amazingly complex graphics and animations with just a few lines of Python code. NodeBox is free and open source, with plenty of example scripts to get you started. Just looking through their online gallery is enough to get the "what-if" juices flowing.

Graph Sketcher and DataGraph
If you hate pretty much everything about Excel graphs, you might like everything about these two graph programs. Graph Sketcher is for quick, brainstorming type graph drawing - use their simple tools to draw pretty much any abstract relationship in 2D, with or without data. DataGraph is more powerful and meant to plot large volumes of data. The defaults start out fairly aesthetically pleasing, but there are many many ways to tweak the look of graphs, add or switch data, add additional axes, and plot multiple dimensions simultaneously. Both applications export to PDF for high-resolution figures, with DataGraph allowing export to vector-based formats as well for use in publications.

In addition to these, the Omni group has a suite of applications for boosting productivity, managing information, and drawing high-quality graphics (much more easily than the impossibly hard to use Adobe Illustrator); Journler is a great Mail-like program for organizing notes such as your lab notebook; and, of course, Papers is a must for anyone who reads scientific papers on a regular basis.

Be sure to check out MacResearch for more innovative applications geared especially towards science and research.
          Open Science at PSB - deadline approaching!   
The initial deadline for proposals for the first Open Science workshop at PSB is coming up on June 1. We welcome submissions on almost anything related to Open Science - tools, platforms, and resources; applications, first-hand experiences, or case studies; cultural, social, and historical perspectives or studies; Open Access and open source; pretty much anything that will help us get a better picture of how Open Science has developed, where it is now, what's brewing on the horizon, and what's needed going forward. The call for participation has more detailed information on the workshop and submission instructions.

Note that the proposal need not be a fully mature or completely fleshed out abstract - a rough outline of the content of the proposed talk is sufficient. The early deadline is simply for us to get a better idea of what the workshop will look like, and there is ample time to continue refining abstracts thereafter.

There is no selection process for posters; anyone interested in presenting a poster may present. The deadline for submitting a poster abstract is Sept 12; however, early submissions are encouraged so that we may better organize the workshop!

Fellow bloggers and readers - please take a moment to post a short note about our workshop on your own blogs, or send notice of our call for participation to potentially interested friends and colleagues! Thanks in advance. :)
          Zephyr QA Leader - Intel - Bangalore, Karnataka   
We have a long track record of contributing to and sponsoring a wide variety of open source projects, from the Linux kernel to the visualization stack to large...
From Intel - Fri, 02 Jun 2017 22:17:46 GMT - View all Bangalore, Karnataka jobs
          Zephyr Test Automation and Test Tool Engineer - Intel - Bangalore, Karnataka   
We have a long track record of contributing to and sponsoring a wide variety of open source projects, from the Linux kernel to the visualization stack to large...
From Intel - Wed, 10 May 2017 10:24:48 GMT - View all Bangalore, Karnataka jobs
          Software Engineer – DroneCode Lead - Intel - Bangalore, Karnataka   
We have a long track record of contributing to and sponsoring a wide variety of open source projects, from the Linux kernel to the visualization stack to large...
From Intel - Sat, 18 Mar 2017 10:18:40 GMT - View all Bangalore, Karnataka jobs
          BEST PHP Training in Noida   
PHP (recursive acronym for PHP: Hypertext Preprocessor) is a widely-used open source general-purpose scripting language.
          Dropping the F bomb   
One thing I talked about in the history of Geek Feminism that I presented at Open Source Bridge the other week was that, as far as I know, GF was the first group in the tech side of geekdom (tech industry, free and open source software, etc) to use the word “feminist”. In SFF fandom, […]
          Feminist Point of View – my slides from Open Source Bridge   
A couple of weeks ago I gave a talk at Open Source Bridge entitled Feminist Point of View: A Geek Feminist Retrospective. The presentation was a review of the 6 years of the Geek Feminism wiki and blog, and the lessons we’ve learned from doing this. I said I’d post the slides here on the […]
          Principal SDE Lead - Microsoft - Redmond, WA   
Experience with open source platforms like node.js, Ruby on Rails, the JVM ecosystem, the Hadoop ecosystem, data platforms like Postgres, MongoDB and Cassandra...
From Microsoft - Thu, 29 Jun 2017 10:48:18 GMT - View all Redmond, WA jobs
          Update: sipphone (Utilities)   

sipphone 1.2.1


Device: iOS iPhone
Category: Utilities
Price: Free, Version: 1.2.0 -> 1.2.1 (iTunes)

Description:

sip:phone is a Unified Communication Client brought to you by Sipwise GmbH, the open source soft-switch vendor.

IMPORTANT NOTE:
sip:phone requires an existing account on any Sipwise-based system (e.g. a Sipwise sip:provider CE, sip:provider PRO or sip:carrier system on version 3.x or higher) and does NOT provide a VoIP service on its own.

HIGHLIGHTS:
• Secure VoIP softphone for voice, presence and instant messaging.
• Auto-Discovery of contacts in your address book.
• Encrypted SIP and XMPP communication for secure calls and messaging.
• Recent Calls List
• Recent Chat List
• Avatar support in Chat and Contact List
• Put calls on Mute and Speaker Phone
• DNS SRV Support for SIP and XMPP
• DTMF Support for operating voice menus and auto attendants

IMPORTANT VOIP OVER MOBILE/CELLULAR DATA NOTICE:
Some mobile network operators may prohibit or restrict the use of VoIP functionality over their network and may also impose additional fees, or other charges in connection with VoIP. You agree to learn and abide by your cellular carrier's network restrictions. Sipwise GmbH will not be held liable for any charges, fees or liability imposed by your carrier for use of VoIP over Mobile/Cellular Data.

EMERGENCY CALLS:
The sip:phone application delivers calls on a best-effort basis to the service you are signed up with. As a consequence, the sip:phone application is not intended, designed, or fit for placing, carrying or supporting Emergency Calls. Sipwise GmbH will not be liable for any costs or damages arising either directly or indirectly from the use of the software for Emergency Calls. Using sip:phone as a default dialer may interfere with dialing emergency services.

What's New

Small bug fix.

sipphone


          5 open source tools for marketing   

Marketing stacks are the set of technologies used by those who create advertising campaigns to execute, analyze and improve marketing activities. These tools also include marketing automation, data enrichment and data analytics instruments. Today we want to present 5 of the best open source tools dedicated to strengthening marketing stacks.

Using open source tools not only cuts costs, but also frees company and customer data from proprietary platforms, effectively making your business more independent.

Piwik

Let's start with Piwik, a platform created in …



          (USA-FL-Tampa Bay) Application Security Engineer   
Job Purpose
The Application Security Engineer is responsible for formulating security test strategies, designing security test plans and test cases, and executing security tests to validate that the application is secured according to the defined security policy.

Job Responsibilities
• Reviews security requirements of applications and project documentation and asks follow-up questions as needed to gain a full understanding of requirements and applications
• Integrates security testing into the CI/CD process
• Develops Ruby and/or Python code to support security testing automation
• Performs code reviews of application source code
• Develops standards for secure software coding
• Defines and develops security test strategies for small-medium projects; provides input for large projects/programs
• Develops security test plans and test cases and ensures coverage of requirements and application functionality
• Executes automated and manual security tests according to test strategy
• Finds ways to enhance the security testing framework and keeps looking for ways to take it to the next level
• Provides feedback to project team and other internal customers on the production readiness of software as it relates to security

• Bachelor's degree or equivalent work experience
• 2 years of experience in a technology role
• 2 years developing code and scripting languages, Java and Ruby preferred
• High level of knowledge and ability in application security
• Able to establish test plans and design effective security test cases
• Good verbal and written communication skills in English
• Experience leading small work teams
• Experience using security tools like Fortify SCA, Burp, or other open source security tools
• Strong data validation skills
          Senior Principal Engineer (C++ or OpenSource)   
TX-Dallas, Our client is looking for a Senior Principal Engineer for a 12 month contract-to-hire or Direct Hire position in their downtown Dallas office. Technical team lead with experience in open source, micro-container based architecture, data, ETL – ideally leading a product development oriented team. Be the driver for developing the product. If he puts a scrum team together to go develop that—this perso
          Tool / Program for Testing and Calling SOAP Web Services (SoapUI)   
You don't always have the time and the inclination to write a web service client. Luckily, there is a program called "SoapUI" that handles this task efficiently. Whether SOAP, REST or AMF, SoapUI needs only the WSDL (URL or file) and an endpoint.   SoapUI is open source! …and that is something one should […]
          Copter 3.2.1 on APM sudden drop in Altitude in Alt-Hold up to a crash   

@Eddi_Maevski wrote:

Hi,

Using the APM from a CX-20 (open source), version 2.52, I encountered a few drops in Alt-Hold mode - the copter just drops out of the sky. I've tried analyzing the logs yet can't reach a solid conclusion as to what is wrong.

Any help will be appreciated, logs attached.
2017-05-02 12-13-25.zip (3.0 MB)

Posts: 2

Participants: 2

Read full topic


          Press Release: Open Indicators Consortium   
Here's the press release from the Open Indicators Consortium:

June 14, 2011

From across the nation, local, regional and state data partners have collaborated with a team of 20 faculty and graduate students at one of the world’s top data visualization labs in the Open Indicators Consortium to create Weave (Web-based Analysis and Visualization Environment), a high performance web-based open source software platform. Weave allows users to explore, analyze, visualize and disseminate data online from any location at any time.

The Open Indicators Consortium’s goal is to transform publicly available data into visually compelling and actionable indicators to inform public policy and community-based decision makers. Since 2008, the Open Indicators Consortium (OIC) has brought together technical and academic experts, data providers and data users. With its technical lead and partner, the University of Massachusetts Lowell’s Institute for Visualization and Perception Research, the OIC is soft-launching Weave 1.0 BETA in preparation for the official release of Weave 1.0 in mid-fall.

The Weave core code is being released under the GNU General Public License version 3 (GPLv3), and the Weave API under the Mozilla Public License (MPL v 1.1).

Full documentation is available through http://www.oicweave.org. The code is available for download now at http://ivpr.github.com/Weave/. These releases provide all that is needed to implement Weave.

More information can be found here.
          Citizen DAN Proposal Intrigues Me   
There's an interesting proposal up for the Knight News Challenge awards this year. The proposal is for something called "Citizen DAN", with DAN standing for Public Data Appliance and Network.

You can read the proposal here, which includes external links for more information.

Here's a short description of the project:

Citizen DAN is an open source framework to leverage relevant local data for citizen journalists. It is a:

■ Appliance for filtering and analyzing data specific to local community indicators
■ Means to visualize local data over time or by neighborhood
■ Meeting place for the public to upload and share local data and information
■ Web data portal that can be individually tailored by any local community
■ Node in a global network of communities across which to compare indicators of community well-being.

Good decisions and good journalism require good information. Starting with pre-loaded government data, Citizen DAN provides any citizen the framework to learn and compare local statistics and data with other similar communities. This helps to promote the grist for citizen journalism; it is also a vehicle for discovery and learning across the community.


Citizen DAN comes pre-packaged with all necessary deployment components and documentation, including local data from government sources. It includes facilities for direct upload of additional local data in formats from spreadsheets to standard databases. Many standard converters are included with the basic package.


Citizen DAN may be implemented by local governments or by community advocacy groups. When deployed, using its clear documentation, sponsors may choose whether or what portions of local data are exposed to the broader Citizen DAN network. Data exposed on the network is automatically available to any other network community for comparison and analysis purposes.


This data appliance and network (DAN) is multi-lingual. It will be tested in three cities in Canada and the US, showing its multi-lingual capabilities in English, Spanish and French.

What has me most excited is not just the project itself, but the growth in open-source solutions to data presentation/management for community indicators programs. These should lower the barriers to entry for many communities to establish/maintain a useful indicators set, and help spur increased innovation in both what we measure and how we use what we measure.

As more of these solutions move from the drawing board through testing and implementation, we'll share them here. In the meantime, I applaud the many folks out there doing good work to make my job both easier and more effective.
          Senior Software Test Automation Engineer - Jurong Island   
Familiarity with commercial and open source test automation and test case management technologies such as JMeter, Robot Framework, Selenium, Watir or Hudson etc...
From Jobs Bank - Tue, 27 Jun 2017 10:03:13 GMT - View all Jurong Island jobs
          Automation Test Engineer for open source frameworks (Investment Banking) - Pasir Ris   
Experience in Continuous Integration Tool – Jenkins / Hudson. Optimum Solutions (Co....
From Jobs Bank - Wed, 28 Jun 2017 09:54:55 GMT - View all Pasir Ris jobs
          Visual Studio 2010 Best Practices   

Originally posted on: http://iamsaif.com/archive/2012/11/16/visual-studio-2010-best-practices.aspx

I’d like to thank Packt for providing me with a review version of Visual Studio 2010 Best Practices eBook.

In fairness, I also know the author, Peter, having seen him speak at DevTeach on many occasions. I started by looking at the table of contents to see what this book was about; knowing that "best practices" is a real misnomer, I wanted to see what they were. I really like the fact that he starts the book by saying they are not really best practices but actually recommended practices.

As a Team Foundation Server user I found that chapter 2 was more for the open source crowd, and I really skimmed it. The portion on branching was well documented; although I'm not a fan of the testing branch myself, the rest was right on. The section on the merge-remote-changes (bring the outside to you) paradigm is really important and was touched on.

Chapter 3 has good solid practices on low level constructs like generics and exceptions.

Chapter 4 dives into architectural practices like decoupling, distributed architecture and data based architecture.  DTOs and ORMs are touched on briefly as is NoSQL.

Chapter 5 is about deployment and is really a great primer on all the "packaging" technologies like Visual Studio Setup and Deployment (deprecated in 2012), ClickOnce and WiX, the major player outside of commercial solutions. There is a nice section on how to move from VSSD to WiX; this is going to be important in the coming years due to the fact that VS 2012 doesn't support VSSD.

In chapter 6 we dive into automated testing practices, including test coverage, mocking, TDD, SpecDD and Continuous Testing.  Peter covers all those concepts really nicely albeit succinctly. Being a book on recommended practices I find this is really good.

I really enjoyed chapter 7, which gave me a lot of great tips to enhance my Visual Studio "experience". Tips on organizing projects were good. Also, even though I knew about configurations, I like that he put that in there so you can move all your settings to another machine; a lot of people don't know about that. Quick Find and ReSharper are also briefly covered. He touches on macros (deprecated in 2012). Finally he touches on Continuous Integration, a very important concept in today's ALM landscape.

Chapter 8 is all about Parallelization: threads, Async, division of labor, reactive extensions. All those concepts are touched on and, again, generalized approaches to those modern problems are given.

Chapter 9 goes into distributed apps, the most used and accepted practice in the industry for .NET projects. The chapter tackles concepts like Scalability, Messaging and Cloud (the flavor of the month in distributed apps, although I think this one will stick ;-)). He also looks at protocols, TCP/UDP, and how to debug distributed apps. He touches on logging and health monitoring.

Chapter 10 tackles recommended practices for web services, starting with implementing WCF services, which goes into all sorts of goodness like how to host in IIS or self-host and how to manually test WCF services; there is also a section on authentication and authorization. ASP.NET web services are also touched on in that chapter.

All in all a good read, with nice tips and accepted practices. I like the conciseness of the subjects; Peter touches on a lot of things in this book and uses a lot of the current technology flavors to explain the concepts.

UPDATE: Dylan has a good comment ;-).

Here is the link: http://www.amazon.com/Visual-Studio-2010-Best-Practices/dp/1849687161/ (amazon)

http://www.packtpub.com/visual-studio-2010-best-practices/book (packtbub)

Cheers,

ET


          StackUnderflow.js: A JavaScript Library and Mashup Tool for StackExchange   

StackUnderflow.js is a JavaScript library that lets you retrieve – and render – questions from the StackExchange API directly on your website just by including a simple, lightweight .js script.

The library is fully documented, so for technical details please check out the StackApps entry for it, and follow the links to the GitHub repository. The rest of this post is about my motivation for the library, how I am using it on the blog, and some other thoughts about the API.

StackExchange (e.g. StackOverflow) has recently published an API (still in beta). It’s not very often that such a rich source of data suddenly becomes accessible. So it got me a little excited about all the possibilities. I think the full set of possibilities has yet to be realized, even by the rapidly growing set of entries in the StackExchange API Contest. Like most new things, it takes time. Plus, the API is currently read-only, but it seems they have plans to add write support in the future. Now that will be interesting. An idea for utilizing that in a novel way just popped into my head, just now.

Anyway – one thing I have noticed over the last few years is just how much StackOverflow.com has grown as a referrer to this blog. There's an untold number of SO questions that link here as a reference. StackOverflow is consistently one of my top referrers. I joked once that my SO rep is grossly understated, for credit for all those answers, I get not.

Searching for “InfinitiesLoop” on StackOverflow returns over 100 results:

http://stackoverflow.com/search?q=infinitiesloop

So the first thought I had for utilizing the API is to bring those questions directly to this blog. I want readers to be able to see the SO questions that link to my blog in general, or to each individual blog entry. The nature of my blog entries is such that most of my referrers are from Google searches, people looking for answers to problems they have. It's only natural to try and bring two sources that are very likely to help together, is it not? It's a perfect marriage if you ask me!

Searching with the StackExchange API

As always seems to be the case, as soon as I start dipping my feet into some new technology, I immediately discover that what I want to accomplish is beyond its limitations. Fooey.

The StackExchange API does not support searching the body or answers of a question, only the tags and the title.

The reason stated is for performance – they suggest you use a proper search engine to look for questions by their content instead. That makes sense, I guess. Why reinvent the wheel, Google and the like are more than capable. The way you do it is pretty simple. All StackExchange questions are found under the “/questions” url, and you can restrict matches to that url. Here’s a Google search that finds all questions linking here:

http://www.google.com/search?hl=en&q=site:stackoverflow.com/questions+weblogs.asp.net/infinitiesloop

You’ll see in the results that all the urls look like this:

http://stackoverflow.com/questions/questionid/title

Ah ha – so doing this I can get the question IDs!

Abstracting it Away

So, the first thing I did was work around this limitation by utilizing an AJAX Google search to find the questions, then the StackExchange API to retrieve the questions. There’s two disadvantages to that: (1) The Google API I’m using is one that limits results to 8 per request, and (2) We must now perform at least two requests to get the data. But I think these disadvantages are no biggie – it wouldn’t be common to find more than 8 questions for one article, and even if there were, the top 8 results should be very relevant, and I don’t necessarily want to bombard you with every result anyway. And the added delay is no biggie – it’s still very fast, and this content is intended to be shown as a sort of ‘extra’ part of the site, which will silently load while the user is focusing on the main content.

Rendering the Questions/Data

Of course I fully anticipate there to be a very rich JavaScript API wrapper for StackExchange. I would love if StackUnderflow.js turned into one (hey, it’s open source). But there’s too many people out there that are smarter and have more time than me, so I doubt it. There’s already one posted that is automatically generated from the robust StackExchange API help.

Cool, but I hope these libraries realize that almost as important as getting the raw data is getting it onto the page. So StackUnderflow.js not only lets you get raw data, but has a built-in ability for rendering it, too. Rendering it in the familiar StackExchange way. Currently, it only supports rendering questions, but I intend on adding answers, comments, badges, etc, as well.

It also lets you customize the rendering via a very simple templating engine (by no means meant to be a generic templating engine, but more than adequate for what is needed here), and you can of course customize the CSS, or do both.

How I am Using It

This blog is hosted on Community Server from who-knows-who. I was fortunate to get this blog as a virtue of being a Microsoft employee (although I think its more open now). It sure has done wonders for my readership compared to when I had it on blogger. But this means I only have a little bit of control over it. One day I dream I’ll host it myself and thus have complete control over it, but for now, I’m stuck here.

The dashboard for this blog lets me enter any HTML, even script, into the ‘news’ section, which appears on the left navbar. The blog entries themselves all live under a <div> with id “content2”. I want whatever page the user is on to show StackOverflow pages linking to it. If they are on the main page, that will mean any and all links. If they are looking at the specific page for a blog article, it will mean links to that article only.

Some of the lines may wrap horribly – expand your browser if you can (ahemipadahem).

<link type="text/css" rel="Stylesheet" href="http://infinity88.com/stackunderflow/stackoverflow.css" />
<script type="text/javascript" src="http://infinity88.com/stackunderflow/stackunderflow-1.0.0.min.js"></script>
<script type="text/javascript">
// Find questions linking to the current page (null = default to the page URL),
// then render any results into the "content2" div that holds the blog entries.
stackunderflow.googleQuestions(null, function(questions) {
    if (questions.questions.length) {
        var content = document.getElementById("content2");
        var header = document.createElement("h3");
        var msg = (document.location.toString().toLowerCase() === "http://weblogs.asp.net/infinitiesloop/")
            ? 'Some <a href="http://stackOverflow.com">StackOverflow</a> questions that link to this BLOG...'
            : 'Some <a href="http://stackOverflow.com">StackOverflow</a> questions that link to this ARTICLE...';
        header.innerHTML = msg + " <br/><span style='font-size:8px'>(powered by <a href='http://github.com/infinitiesloop/stackunderflow.js'>StackUnderflow.js</a>)</span>";
        content.appendChild(header);
        // Render the question summaries using the default template and CSS.
        stackunderflow.render.questions(questions, "#content2");
    }
});
</script>

This code isn’t the prettiest – this is by no means meant to be something you copy and paste into your blogs. First of all, my blog being out of my control already does horrible things on the client side that I hate. Don’t treat your blog that way :)

The code to focus on is the included script, the stylesheet, the ‘googleQuestions’ API, and the ‘render.questions’ call. Actually, it supports a shorter chaining syntax, which you’ll see in the readme on github. Note that in the call to ‘googleQuestions’ I pass null as the search term. The library uses the current page URL by default if you don’t provide one. Being in the side bar, this code appears on every page. So it will always show links to the page you are on. You could, if you wanted, make it more specific than that, or include keywords, etc.; basically, anything you might Google for.

The ‘content2’ div is appended to (the library takes care of waiting for DOMContentLoaded, etc, for you). And since I haven’t specified a template to use, the default one is used, which uses a similar HTML structure that StackOverflow.com itself uses to show question summaries. It also uses a certain set of CSS classes, hence the link to the stylesheet. The classes aren’t the same as StackOverflow.com though – they’ve been modified to (1) be more uniquely named by having a ‘se-‘ prefix, and (2) only apply to content within the dynamically rendered content so it can’t possibly mess with the rest of your page.

Just look at the bottom of my main page to see it in action, and then try some of my popular articles like Truly Understanding ViewState.

I have lots of ideas for improving it, not the least of which is the ability to show all questions from a certain user, so you could show a list of your own questions, and filtering of those questions (e.g. only unanswered ones). But mostly I hope people pick up an interest in the project and contribute to it via GitHub!

Enjoy!


          Excelling at CucumberJVM GLOBAL Step Definitions   
Cucumber is going global baby!  They've a vision that any and all definitions are available at the beck and call of any feature file.  Other BDD implementations allow static linking of a feature file to a specific set of definition code.  Let's give the idea of global definitions a try with CucumberJVM (Java) and see what we can learn, cover tools and tricks to work with them, and look at how to design the definition code to be maintainable.
(XXX Link to example code on GitHub goes here.)

Feature files, Steps, Definitions, and Test Runners, oh my!

Feature files are text files containing the BDD (well, really Gherkin) Steps such as Given, When, Then.  Definitions are built in programming languages and define what those steps mean.  Test Runners (such as JUnit, or the cucumber command line) launch a program that looks for feature files, parses each feature file, and executes each Step by executing a Definition that matches the Step.  BDD test frameworks usually allow some configuration of how to match a Step with a Definition.  Cucumber's vision is that all Definitions should be global and that the feature file should contain enough context to do this correctly.
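
As a rough sketch of that matching (in the cucumber-jvm 1.2.x Java style; the step text, regex, and class name here are hypothetical), a Definition is just an annotated method whose pattern the runner matches against each Step:

package com.features.definitions;

import cucumber.api.java.en.When;

public class WhenPurchasingItem {

    // Cucumber matches a Step like: When purchasing a "dog collar"
    // against this regex; the quoted value is captured and passed in.
    @When("^purchasing a \"([^\"]*)\"$")
    public void purchasingA(String item) {
        // drive the application under test here
    }
}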

Feature file for planning

This is the feature file used in planning. It reads pretty well and gives a team a starting point for conversations on a point-of-sale feature for a pet store.
Buying a dog at a pet store such as Petco gives you deals such as this.
(click pic to enlarge)
After planning a number of features this way, once the Sprint started and development of the test automation began, I realized I needed more context. If the feature file alone was going to "drive" definition discovery, having descriptive columns wasn't going to give me enough differentiation across all steps in a global context. For example:
When purchasing a "selected accessory"
would generate a match for all other definitions with the words "When purchasing a," totally missing the important piece "selected accessory."
The Natural editor reporting multiple Definitions matching this Step.

Adjusting the feature file by pulling the descriptive columns "out" will allow us to work with global definitions.

How to know a Step has enough "closure?"

First off, we'll never be perfect as reality always brings new adventure. But you'll get closer faster by: reading each step alone, ignoring the context of the scenario title, feature file name, and feature file location.  This is how I realized that "When purchasing a" had too little context as the nice context of the column name wasn't going to help me.

So global definitions will make your feature files a little more wordy.  And you will be forced to make decisions in the future when you discover collisions.  Feature file editors like Natural will complain to you when you add one that has ambiguous definitions.

Test Automation Design

First off, let's get a feedback loop working.  (My code is on GitHub at: XXX)  Put the feature file into source control, add a test runner (or if you got the cucumber plugin working in eclipse, that will work too), and execute your test.  Observe that the feature file is executed but the scenarios are skipped as there are no definitions.  Also the console will give you stub code for the methods.

Organize feature files in a sensible hierarchy

JUnit test runner

JUnit test case which hands off to Cucumber
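The screenshots show the actual runner; as a minimal sketch (assuming the cucumber-junit 1.2.x API, with illustrative paths), it is just an empty JUnit class that hands execution off to Cucumber:

package com.features;

import org.junit.runner.RunWith;
import cucumber.api.CucumberOptions;
import cucumber.api.junit.Cucumber;

// JUnit hands off to Cucumber, which parses the feature files under
// "features" and matches their Steps against ALL definitions found on
// the glue path -- with Cucumber, definitions are global.
@RunWith(Cucumber.class)
@CucumberOptions(
        features = "src/java/com/features/purchasing",
        glue = "com.features.definitions")
public class BuyDogTest {
}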
Natural gives feedback that definitions are missing
(If you add definitions, sometimes the feature file needs to
be reopened to force Natural to re-parse and check.)
Global definitions change how automation is designed and built.  When using BDD tools that allow the developer to control linking Steps to Definitions, you'd typically see a one to one mapping of feature file to the java file containing the class of definitions:
com/features/purchasing/BuyDog.feature
com/features/purchasing/BuyDogStepDef.java
com/feature/pageobjects/....   
With Cucumber, you're encouraged to build classes in this manner:
com/features/purchasing/BuyDog.feature
com/features/definitions/GivenBuysDog.java
com/features/definitions/WhenBoughtSelectedAccessory.java
com/features/definitions/ThenDiscount.java
com/feature/pageobjects/....  
Although this explosion of smaller objects isn't necessarily a bad thing, it leaves us with a problem: the "When" at line 8 needs to communicate with the "Then" at line 9, so these smaller objects need a way to communicate with each other.  A Singleton pattern could do this but puts more burden on the programmer, as lifecycle management now needs to be done to maintain isolation between tests (so that running one scenario doesn't cause a side effect in another scenario due to mismanaged state in a Singleton).  A better alternative is to work with Cucumber's lifecycle for doing this via dependency injection.

Working inside a World

The World lifecycle pattern is simple: the state to be shared between Definitions is stored in the World, and the world is created at the start of executing a scenario and then destroyed upon completion of the scenario.  Upon execution of a new scenario, a new World is created again, and so on.  Although the World pattern is heavily emphasized in Cucumber.JS, it's not so explicit for CucumberJVM.  Cucumber manages the World lifecycle automatically if you use Dependency Injection.  PicoContainer (built by the authors of Cucumber) is a simple and lite weight framework that gets the job done via constructor injection.

Here's how

Add picocontainer to your build dependencies (Because I found PicoContainer.org hard to work with, I used Maven.org to search for the latest versions of "cucumber-picocontainer" and "picocontainer."):
Add two jars to activate World lifecycle and Dependency Injection
Take a look at your three definitions and create a new class for passing information.
Three Step Definitions
Since in this case it's about a shopping cart of items, let's go with that.
For now, put the object in the same package as its steps in the top-level package for definitions.  Later we'll reorganize, but for now keep writing code because it will be easier to re-organize after more of the design has emerged.  Since PicoContainer uses constructor injection, add constructors for the data injection.
Constructor Injection
(Click pic for larger resolution)
This is all the "structure" code needed for PicoContainer and Cucumber.  When Cucumber executes a feature file with these steps and matches them to these definitions, it will use PicoContainer to find and inject the dependencies when it constructs these classes, and these dependencies will be inserted into the World during Scenario execution.

Here are the Given, When, Then definitions using the dependency:
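
(The screenshot showed the original code; the sketch below is a hedged reconstruction, where the ShoppingCart holder, step text, and discount rule are hypothetical stand-ins. Each class would live in its own file.)

// --- ShoppingCart.java: a plain object shared through the World ---
package com.features.definitions.shopping;

public class ShoppingCart {
    private int discountPercent;
    public void setDiscountPercent(int p) { discountPercent = p; }
    public int getDiscountPercent() { return discountPercent; }
}

// --- GivenBuysDog.java ---
package com.features.definitions.shopping;

import cucumber.api.java.en.Given;

public class GivenBuysDog {
    private final ShoppingCart cart;

    // PicoContainer satisfies this constructor automatically, handing every
    // definition in the Scenario the same per-World ShoppingCart instance.
    public GivenBuysDog(ShoppingCart cart) { this.cart = cart; }

    @Given("^a customer buys a dog$")
    public void aCustomerBuysADog() {
        cart.setDiscountPercent(10); // illustrative business rule
    }
}

// --- ThenDiscount.java ---
package com.features.definitions.shopping;

import static org.junit.Assert.assertEquals;

import cucumber.api.java.en.Then;

public class ThenDiscount {
    private final ShoppingCart cart;

    public ThenDiscount(ShoppingCart cart) { this.cart = cart; }

    @Then("^the discount is (\\d+) percent$")
    public void theDiscountIs(int expected) {
        assertEquals(expected, cart.getDiscountPercent());
    }
}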


 Execute the test runner and you'll see this is enough for the first scenario outline.


To illustrate the World is being destroyed, a short experiment such as injecting a counter to count how many times the Given is called will make this clear.

Counter is always one because it's replaced each time the Scenario is run
(Click to see full size pic.)
Although Counter is always incremented in the definition for the Given, and checked in the definition for the Then, it is always set to 1 because each row of a scenario outline gets its own World.
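
A sketch of that experiment (the Counter class is hypothetical, injected exactly like the cart):

// Counter.java -- another per-World object managed by PicoContainer
package com.features.definitions.shopping;

public class Counter {
    private int count;
    public void increment() { count++; }
    public int value() { return count; }
}

// In the Given definition:  counter.increment();
// In the Then definition:   assertEquals(1, counter.value());
// The assertion always passes with 1: each Scenario Outline row gets a
// fresh World, so the injected Counter never survives between rows.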

Ramifications of Global Definitions on Design

A good design does at least these two things well (in this order) that allow a program to respond to change:
1) communicates intent in an understandable way,  and
2) is maintainable.
Another 50 pages could be written about other important characteristics--the book Clean Code is a good reference--but let's keep it to the point: we don't program in binary because it's difficult to understand intent and if we wrote in binary anyhow, eventually you'll be hating life when you have to respond to new requirements.

Global definitions mean our feature files could have a relationship with any definition (a Java class with a @Given, @When, or @Then annotation).  So organize the feature files in a way that makes them an index into your product's features.  Feature files will be the index into your definitions (Java code) as well.  To organize the Java code so it communicates intent and is maintainable, use the principle of "keeping things that work together next to each other."  Said another way, keep definitions grouped with the things you're injecting into them.  This is a big departure from BDD frameworks that don't do global definitions, where usually the feature files and definitions are grouped together in "src/java/com/feature/purchase/buydog."  With global definitions, doing so would actually misinform.  With global definitions, it'd be better to drop everything in one namespace.  But let's try something better than that.

For example, organize feature files thusly (there may not be an advantage to having feature files as children of the src/java directory, but I did this out of habit):
src/java/com/features/purchase/BuyDog.feature
src/java/com/features/purchase/BuyCat.feature
src/java/com/features/purchase/BuyFish.feature
src/java/com/features/returns/ReturnFishTank.feature
src/java/com/features/returns/ReturnDog.feature

For Java code, I looked at each set of definitions and their collaborators and tried to group them in a sensible way:
(Screenshots: the definitions grouped by collaborator, plus the ones added later for the selected-accessory scenarios.)


Since I wanted the When and the Then to collaborate through the shopping cart, I needed to drop them in the same "shopping" namespace as the previous scenario.  The fact that I'm using the same Given definition reinforces that decision.  This all makes sense since they are all about the same thing.  But get used to the idea that just because steps are in the same feature file, their definitions could be anywhere.

Time goes on; keep the stair rails polished

Since feature files are a reflection of a product's features, BDD test automation needs to respond to three kinds of changes:
  • new behaviors/features, 
  • adjusting existing behaviors/features, and 
  • adjusting how an existing behavior/feature operates.

New behaviors versus adjusting existing behaviors

Organize feature files in a sensible hierarchy with good feature file names so it's easily browsable and searchable in order to answer the question, "is this new idea the PO has a new behavior or a change in an existing behavior?"
src/java/com/features/purchase/BuyDog.feature
src/java/com/features/purchase/BuyCat.feature
src/java/com/features/purchase/BuyFish.feature
src/java/com/features/returns/ReturnFishTank.feature
src/java/com/features/returns/ReturnDog.feature 
...
(It'll be hard to organize features without knowing the business you're building behaviors for.  Go find someone to help/interview about that as this knowledge typically isn't in the IT part of the organization.)

The business wants to collect customer contact info so they can send customers offers via physical mail or email. To do that, they make an offer at time of checkout: sign up for a VIP card, which collects your contact info and sends you more offers, and get an additional 5% discount on purchases.
If it's known that there will be ten more VIP card behaviors, better to make a directory just for different VIP features.  But if all we know at the time is that there is just this one feature, then we can just add the behavior into an existing feature file as shown (we can always move the feature files around later).
Adding another scenario outline (highlighted) to BuyDog
The global definitions related to these steps need to be updated to pass information about the VIP card status (the definitions for the Given and Then).  Because the definitions are global, finding the impacted definitions should be driven from the feature file.  If you've a good feature file editor like Natural, you can open those steps so you can implement a good way to inject (via PicoContainer) an object to pass along the VIP card status.  If you haven't a good editor, then use your IDE to search for the step, filtering by .java file.  If you use IntelliJ, it has built-in support for Gherkin, so you can put the cursor on a feature file's step and, with CTRL/CMD-B, get to the definition defined by your Java code.  If you have nothing but vanilla Eclipse (no Natural plugin), here is how to work with search:

Answering the question, "what feature files use this definition?" is a bit harder in that you need to work around the regular expressions.  I'm not aware of any tools that help.


Adjusting only implementation
In this case, the behavior has been implemented but, darn it, the implementation just seems lacking or is in need of an update.  Assumedly the PO knows it's an update of an existing implementation by browsing through the feature documentation (maybe it's been turned into a GitBook).  The team brings the story in with a reference to the existing feature file and simple bullet points on how to change the implementation.  During the sprint, the developers start at the feature file and from there open the definitions, read the Java code, and then change the definitions so the test fails against the updated implementation.  With the automation complete, the developers build the functionality (assumedly using TDD so they have micro tests which keep test automation in a sleek and pointy pyramid shape).

Closing Thoughts

With support from a feature file editor that will escort you to the definitions, World lifecycle management enabled via dependency injection, and the fact that Cucumber is maintained by developers with significant control of the direction and vision of the company (Aslak Hellesøy, Joseph Wilk, Matt Wynne, Gregory Hnatiuk, and Mike Sassak), I'd say give global definitions a try.  It's easy to dismiss trying something like this out of hand.  In fact, some have mentioned how they've tried global definitions and failed (see the section "Global scoping is Evil").  In that case they were doing BDD incorrectly (not doing *B*DD at all, in fact), as shown below, complaining about "click the search button."
Implementation details about UI aren't behavioral
Building "un"behavioral tests is a common early adopter's mistake which can happen even to experienced people who haven't stepped out of the box.  Games like Behavioral or Not? teach how to build feature files at the correct level.  Building tests the way the author of the above wanted to wasn't maintainable, so this effort was in bad shape with or without global definitions.  Global definitions forced them to fail faster, which was a good thing, as they gave up rather than build a bunch of automation that's expensive to maintain.  This is likely one of the reasons Cucumber removed the ability to not use global definitions.  If you're still unsatisfied, use some strategies to teach Cucumber about boundaries, use a different BDD framework such as JBehave, or change Cucumber (it's open source) yourself to meet your needs.

References

picocontainer-for-singleton-di
how-to-pass-variable-values-between-steps-in-cucumber-java
Cucumber BDD environment installation
Global Definitions are EVIL

Troubleshooting

JUnit green bar stops rendering and tests not working

Click in the Test Selection window and look for a stack trace in the Failure Trace pane.  In cases like this, something has happened before JUnit execution has even started.  You'll need to correct the problem exposed in the Failure Trace.

Arity Problem

Failure Trace shows Arity problem
Fiddling with feature steps that already have definitions may result in the above when there is a mismatch between the number of parameters Cucumber is trying to pass from the step in the feature file into the definition.  Check the definition's parameter list and the feature file to see which needs to be straightened out.
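
For illustration (hypothetical step text and names), the mismatch looks like this:

import cucumber.api.java.en.When;

public class ArityExample {

    // Feature file step:  When purchasing a "leash" for 20 dollars
    // The regex captures TWO values, but the method declares only ONE
    // parameter, so Cucumber fails with an arity mismatch at runtime.
    @When("^purchasing a \"([^\"]*)\" for (\\d+) dollars$")
    public void purchasing(String item) {
        // fix: public void purchasing(String item, int dollars)
    }
}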


          Quirky Crowd Controlled Robotic Arm With Arduino   
This is a crowd controlled robotic arm. The hardware for this project has been taken from another open source project, which can be found on GitHub. The arm can be controlled by a large number of people simultaneously over the web by using a web-based form to submit their choices. It was used as an ...
By: abhinavgandhi09

Continue Reading »
          2 different kinds of J2EE developer   

After working for a few years in the J2EE space I've managed to identify two uniquely different kinds of developer:

1. Maintainer - Developers that fall under this category are developers who "love" doing support, tweaking an existing system and extending it to make it more useful. They are very good at maintaining systems, and oftentimes they come up with different kinds of support tools that are really creative and very useful in troubleshooting things. They are the kind of developer that is happy and content opening issues, fixing things, testing to make sure nothing is broken, going through the code base understanding each and everything, and, not to mention, keeping a damn good update on the wiki pages.

They are not the type that can jump into a brand new project and start designing, architecting and integrating the different open source tools available to deliver solutions; they hate that. They are not that interested in the latest and greatest, and sometimes they resist using a new version of the open source framework that is used in the project. Developers who have been working in an end-user environment (a non-IT company) for more than 3-5 years fall under this category, as most of them are content and happy with what they have and will lovingly support the apps that are there.

2. Innovator - These developers are the free-spirited kind, and sometimes they are the cream of the crop. They don't like to be bound to anything, they hate supporting applications, and they don't want to stick around in one place for a number of years. They are the developers that are hungry for innovation and challenges; the more they are challenged technically, the hungrier they are for it, and they will not stop until they know how things work. Developers that fall under this category are mostly either working for IT shops, entrepreneurs or contractors. They love challenges and are always looking around for them. These kinds of developers know which technology works best under what circumstances, and they really know how to hack up a solution that works.

From the above categories I can say that it's an 80-20 rule: 80% of the developers I've met fall under Maintainer while 20% fall under Innovator. It's very easy to recruit Maintainers but it's FRICKIN' difficult to recruit and retain Innovators.


           Australian Open Source Industry & Community Report 2008    

Get the latest report here


          Farewell Mr. J and Hello VM   

The buzz around the Java world nowadays is the arrival of new scripting languages for development on Java, and DZone has an interesting article, Farewell to the 'J' in 'JVM'. I've always been preaching to my friends that the best thing to invest our time in nowadays is not learning a new language but learning more about virtual machines and how they work; a programming language is just an abstract tool for us to get our job done quickly, but what's going on under the hood is what matters at the end of the day. I'm glad that people are now realizing the potential of Java not as a programming language but as a PLATFORM on which you can build anything you can dream of.

The transition is not going to be easy for most people, as that's the nature of our open source socio-economy; nevertheless, people will realize it and gladly accept it as part of them sooner rather than later.


          What if Google buy SpringSource ?   

SpringSource just bought Covalent, which is a very smart move on their part, as it will give them great exposure and is a perfect fit for them in times to come. The support that Covalent has provided over the years can be seen from their list of customers, and this speaks louder than words. The SpringSource team will now be in a better position as a "middleware" player (in terms of product portfolio, expertise and support categories); they are still far away from players like WebMethods, Tibco, etc., but from what I can see they are creeping up and closing that gap. They still have a long way to go, but they are getting there.

The question that begs an answer is what it takes for SpringSource to compete with big guns like WebMethods, Tibco, and the rest; the answer is Google. Google has the resources and money of the big players, not to mention a number of smart developers who are involved in quite a lot of open source projects, and the technology to build the scalable, reliable solutions that the big companies of the world need. All Google needs to do is build an offering from that knowledge and from SpringSource; I'm sure it's not hard for them if they want to do it. I think the two companies provide a blend of talent and products that can benefit all the companies out there.


          Motivation of open source software developers   

Check out the following paper, titled NORMS, REWARDS, AND THEIR EFFECT ON THE MOTIVATION OF OPEN SOURCE SOFTWARE DEVELOPERS


          Microsoft tries to put the future on hold   

Found an interesting article in ITWeek written by Tim Anderson. The article quotes Chris Wilson as saying, "In our opinion, a revolution in EcmaScript would be best done with an entirely new language". Excuse me? A new language? Don't we have enough programming languages as it is? Who needs another language, Chris? JavaScript has been doing its job beautifully for the last several years, and like any other programming language it has its limitations, but there is always room for improvement.

I think Microsoft is a bit hesitant to go ahead and implement EcmaScript 4 because they are not that strong in the web space compared to the desktop space. They are still hoping that desktop applications will once again prevail as they did back in the pre-'95 era. Well, as they say, you can't change history but you can make it better, and who can make it better? Not you, of course; the open source community will.


          Mastering Kali Linux for Web Penetration Testing   

Master the art of exploiting advanced web penetration techniques with Kali Linux 2016.2.

About This Book

* Make the most out of advanced web pen-testing techniques using Kali Linux 2016.2
* Explore how Stored (a.k.a. Persistent) XSS attacks work and how to take advantage of them
* Learn to secure your application by performing advanced web-based attacks
* Bypass internet security to traverse from the web to a private network

Who This Book Is For

This book targets IT pen testers, security consultants, and ethical hackers who want to expand their knowledge and gain expertise on advanced web penetration techniques. Prior knowledge of penetration testing would be beneficial.

What You Will Learn

* Establish a fully-featured sandbox for test rehearsal and risk-free investigation of applications
* Enlist open-source information to get a head-start on enumerating account credentials, mapping potential dependencies, and discovering unintended backdoors and exposed information
* Map, scan, and spider web applications using nmap/zenmap, nikto, arachni, webscarab, w3af, and NetCat for more accurate characterization
* Proxy web transactions through tools such as Burp Suite, OWASP's ZAP tool, and Vega to uncover application weaknesses and manipulate responses
* Deploy SQL injection, cross-site scripting, Java vulnerabilities, and overflow attacks using Burp Suite, websploit, and SQLMap to test application robustness
* Evaluate and test identity, authentication, and authorization schemes and sniff out weak cryptography before the black hats do

In Detail

You will start by delving into some common web application architectures in use, both in private and public cloud instances. You will also learn about the most common frameworks for testing, such as OWASP OGT version 4, and how to use them to guide your efforts. In the next section, you will be introduced to web pentesting with core tools and you will also see how to make web applications more secure through rigorous penetration tests using advanced features in open source tools. The book will then show you how to better hone your web pentesting skills in safe environments that can ensure low-risk experimentation with the powerful tools and features in Kali Linux that go beyond a typical script-kiddie approach. After establishing how to test these powerful tools safely, you will understand how to better identify vulnerabilities, position and deploy exploits, compromise authentication and authorization, and test the resilience and exposure applications possess. By the end of this book, you will be well-versed with the web service architecture to identify and evade various protection mechanisms that are used on the Web today. You will leave this book with a greater mastery of essential test techniques needed to verify the secure design, development, and operation of your customers' web applications.

Style and approach

An advanced-level guide filled with real-world examples that will help you take your web application's security to the next level by using Kali Linux 2016.2.

Downloading the example code for this book: You can download the example code files for all Packt books you have purchased from your account at http://www.PacktPub.com. If you purchased this book elsewhere, you can visit http://www.PacktPub.com/support and register to have the code files.


          Mastering Python Networking   

Become an expert in implementing advanced, network-related tasks with Python.

About This Book

* Build the skills to perform all networking tasks using Python with ease
* Use Python for network device automation, DevOps, and software-defined networking
* Get practical guidance to networking with Python

Who This Book Is For

If you are a network engineer or a programmer who wants to use Python for networking, then this book is for you. A basic familiarity with networking-related concepts such as TCP/IP and a familiarity with Python programming will be useful.

What You Will Learn

* Review all the fundamentals of Python and the TCP/IP suite
* Use Python to execute commands when the device does not support the API or programmatic interaction with the device
* Implement automation techniques by integrating Python with Cisco, Juniper, and Arista eAPI
* Integrate Ansible using Python to control Cisco, Juniper, and Arista networks
* Achieve network security with Python
* Build Flask-based web-service APIs with Python
* Construct a Python-based migration plan from a legacy to a scalable SDN-based network

In Detail

This book begins with a review of the TCP/IP protocol suite and a refresher of the core elements of the Python language. Next, you will start using Python and supported libraries to automate network tasks from the current major network vendors. We will look at automating traditional network devices based on the command-line interface, as well as newer devices with API support, with hands-on labs. We will then learn the concepts and practical use cases of the Ansible framework in order to achieve your network goals. We will then move on to using Python for DevOps, starting with using open source tools to test, secure, and analyze your network. Then, we will focus on network monitoring and visualization. We will learn how to retrieve network information using a polling mechanism, flow-based monitoring, and visualizing the data programmatically. Next, we will learn how to use the Python framework to build your own customized network web services. In the last module, you will use Python for SDN, where you will use a Python-based controller with OpenFlow in a hands-on lab to learn its concepts and applications. We will compare and contrast OpenFlow, OpenStack, OpenDaylight, and NFV. Finally, you will use everything you've learned in the book to construct a migration plan to go from a legacy to a scalable SDN-based network.

Style and approach

An easy-to-follow guide packed with hands-on examples of using Python for network device automation, DevOps, and SDN.

Downloading the example code for this book: You can download the example code files for all Packt books you have purchased from your account at http://www.PacktPub.com. If you purchased this book elsewhere, you can visit http://www.PacktPub.com/support and register to have the code files.
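
As a flavour of the CLI-driven automation the book describes (driving a device over SSH when no API is available), here is a minimal sketch using the paramiko library; the host address, credentials and command are placeholders of mine, not examples from the book:

    # Minimal sketch: run a show command on a network device over SSH.
    # Assumes: pip install paramiko; host and credentials below are made up.
    import paramiko

    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # lab use only
    client.connect("192.0.2.1", username="admin", password="secret")

    stdin, stdout, stderr = client.exec_command("show version")
    print(stdout.read().decode())

    client.close()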


          Shopping Cart + InvoicePlace   

I'm not sure if IP is really a good choice here.
I recommend using one of the many open source eCommerce packages out there.

For example, Magento or Prestashop.
They all have customer groups, where you can apply special discounts.
And all of them have a default invoice system built in.


          Microsoft Cloud Day - the ups and downs   

Originally posted on: http://blog.crip.ch/archive/2012/06/23/microsoft-cloud-day---the-ups-and-downs.aspx

The term ‘cloud’ can sometimes obscure the obvious.  Today’s Microsoft Cloud Day conference in London provided a good example.  Scott Guthrie was halfway through what was an excellent keynote when he lost network connectivity.  This proved very disruptive to his presentation which centred on a series of demonstrations of the Azure platform in action.  Great efforts were made to find a solution, but no quick fix presented itself.  The venue’s IT facilities were dreadful – no WiFi, poor 3G reception (forget 4G…this is the UK) and, unbelievably, no-one on hand from the venue staff to help with infrastructure issues.  Eventually, after an unscheduled break, a solution was found and Scott managed to complete his demonstrations.  Further connectivity issues occurred during the day.

I can say that the cause was prosaic.  A member of the venue staff had interfered with a patch board and inadvertently disconnected Scott Guthrie’s machine from the network by pulling out a cable.

I need to state the obvious here.  If your PC is disconnected from the network it can’t communicate with other systems.  This could include a machine under someone’s desk, a mail server located down the hall, a server in the local data centre, an Internet search engine or even, heaven forbid, a role running on Azure.

Inadvertently disconnecting a PC from the network does not imply a fundamental problem with the cloud or any specific cloud platform.  Some of the tweeted comments I’ve seen today are analogous to suggesting that, if you accidentally unplug your microwave from the mains, this suggests some fundamental flaw with the electricity supply to your house.   This is poor reasoning, to say the least.

As far as the conference was concerned, the connectivity issue in the keynote, coupled with some later problems in a couple of presentations, served to exaggerate the perception of poor organisation.   Software problems encountered before the conference prevented the correct set-up of a smartphone app intended to convey agenda information to attendees.  Although some information was available via this app, the organisers decided to print out an agenda at the last moment.  Unfortunately, the agenda sheet did not convey enough information, and attendees were forced to approach conference staff through the day to clarify locations of the various presentations.

Despite these problems, the overwhelming feedback from conference attendees was very positive.  There was a real sense of excitement in the morning keynote.  For many, this was their first sight of new Azure features delivered in the ‘spring’ release.  The most common reaction I heard was amazement and appreciation that Azure’s new IaaS features deliver built-in template support for several flavours of Linux from day one.  This coupled with open source SDKs and several presentations on Azure’s support for Java, node.js, PHP, MongoDB and Hadoop served to communicate that the Azure platform is maturing quickly.  The new virtual network capabilities also surprised many attendees, and the much improved portal experience went down very well.

So, despite some very irritating and disruptive problems, the event served its purpose well, communicating the breadth and depth of the newly upgraded Azure platform.  I enjoyed the day very much.

 


          Microsoft and the open source community   

Originally posted on: http://blog.crip.ch/archive/2012/03/28/microsoft-and-the-open-source-community.aspx

For the last decade, I have repeatedly, in my imitable Microsoft fan boy style, offered an alternative view to commonly held beliefs about Microsoft's stance on open source licensing.  In earlier times, leading figures in Microsoft were very vocal in resisting the idea that commercial licensing is outmoded or morally reprehensible.  Many people interpreted this as all-out corporate opposition to open source licensing.  I never read it that way. It is true that I've met individual employees of Microsoft who are antagonistic towards FOSS (free and open source software), but I've met more who are supportive or at least neutral on the subject.  In any case, individual attitudes of employees don't necessarily reflect a corporate stance.  The strongest opposition I've encountered has actually come from outside the company.  It's not a charitable thought, but I sometimes wonder if there are people in the .NET community who are opposed to FOSS simply because they believe, erroneously, that Microsoft is opposed.

Here, for what it is worth, are the points I've repeated endlessly over the years and which have often been received with quizzical scepticism.

a)  A decade ago, Microsoft's big problem was not FOSS per se, or even with copyleft.  The thing which really kept them awake at night was the fear that one day, someone might find, deep in the heart of the Windows code base, some code that should not be there and which was published under GPL.  The likelihood of this ever happening has long since faded away, but there was a time when MS was running scared.  I suspect this is why they held out for a while from making Windows source code open to inspection.  Nowadays, as an MVP, I am positively encouraged to ask to see Windows source.

b)  Microsoft has never opposed the open source community.  They have had problems with specific people and organisations in the FOSS community.  Back in the 1990s, Richard Stallman gave time and energy to a successful campaign to launch antitrust proceedings against Microsoft.  In more recent times, the negative attitude of certain people to Microsoft's submission of two FOSS licences to the OSI (both of which have long since been accepted), and the mad scramble to try to find any argument, however tenuous, to block their submission was not, let us say, edifying.

c) Microsoft has never, to my knowledge, written off the FOSS model.  They certainly don't agree that more traditional forms of licensing are inappropriate or immoral, and they've always been prepared to say so. 

One reason why it was so hard to convince people that Microsoft is not rabidly antagonistic towards FOSS licensing is that so many people think they have no involvement in open source.  A decade ago, there was virtually no evidence of any such involvement.  However, that was a long time ago.  Quietly over the years, Microsoft has got on with the job of working out how to make use of FOSS licensing and how to support the FOSS community.  For example, as well as making increasingly extensive use of Github, they run an important FOSS forge (CodePlex) on which they, themselves, host many hundreds of distinct projects.  The total count may even be in the thousands now.  I suspect there is a limit of about 500 records on CodePlex searches because, for the past few years, whenever I search for Microsoft-specific projects on CodePlex, I always get approx. 500 hits.  Admittedly, a large volume of the stuff they publish under FOSS licences amounts to code samples, but many of those 'samples' have grown into useful and fully featured frameworks, libraries and tools.

All this is leading up to the observation that yesterday's announcement by Scott Guthrie marks a significant milestone and should not go unnoticed.  If you missed it, let me summarise.   From the first release of .NET, Microsoft has offered a web development framework called ASP.NET.  The core libraries are included in the .NET framework which is released free of charge, but which is not open source.   However, in recent years, the number of libraries that constitute ASP.NET have grown considerably.  Today, most professional ASP.NET web development exploits the ASP.NET MVC framework.  This, together with several other important parts of the ASP.NET technology stack, is released on CodePlex under the Apache 2.0 licence.   Hence, today, a huge swathe of web development on the .NET/Azure platform relies four-square on the use of FOSS frameworks and libraries.

Yesterday, Scott Guthrie announced the next stage of ASP.NET's journey towards FOSS nirvana.  This involves extending ASP.NET's FOSS stack to include Web API and the MVC Razor view engine which is rapidly becoming the de facto 'standard' for building web pages in ASP.NET.  However, perhaps the more important announcement is that the ASP.NET team will now accept and review contributions from the community.  Scott points out that this model is already in place elsewhere in Microsoft, and specifically draws attention to development of the Windows Azure SDKs.  These SDKs are central to Azure development.   The .NET and Java SDKs are published under Apache 2.0 on Github and Microsoft is open to community contributions.  Accepting contributions is a more profound move than simply releasing code under FOSS licensing.  It means that Microsoft is wholeheartedly moving towards a full-blooded open source approach for future evolution of some of their central and most widely used .NET and Azure frameworks and libraries.  In conjunction with Scott's announcement, Microsoft has also released Git support for CodePlex (at long last!) and, perhaps more importantly, announced significant new investment in their own FOSS forge.

Here at Solidsoft we have several reasons to be very interested in Scott's announcement. I'll draw attention to one of them.  Earlier this year we wrote the initial version of a new UK Government web application called CloudStore.  CloudStore provides a way for local and central government to discover and purchase applications and services. We wrote the web site using ASP.NET MVC which is FOSS.  However, this point has been lost on the ladies and gentlemen of the press and, I suspect, on some of the decision makers on the government side.  They announced a few weeks ago that future versions of CloudStore will move to a FOSS framework, clearly oblivious of the fact that it is already built on a FOSS framework.  We are, it is fair to say, mildly irked by the uninformed and badly out-of-date assumption that “if it is Microsoft, it can't be FOSS”.  Old prejudices live on.


          This blog is dead - follow my new blog www.visualstudiogeek.com   

Originally posted on: http://staffofgeeks.net/archive/2016/06/13/this-blog-is-dead---follow-my-new-blog-www.visualstudiogeek.com.aspx



In this era of fast innovation, platforms that cease to innovate and advance are usually disrupted. The blogging platform GeeksWithBlogs.Net seemed promising at first, but it has had no advancement in the last two years; worse, even the bugs reported on the platform haven't been fixed.

I am giving up on GeeksWithBlogs.Net! I'm declaring this blog dead... I have moved over to a new platform that's far more exciting and supported by the open source community. The new blog uses GitHub static pages, leveraging the 'Jekyll' framework. The framework supports Markdown and is backed by a code repository, which means all blog posts are under version control. GitHub even offers free hosting!

I promise to keep the content on the blog fresh and exciting... If you are still interested in reading about the latest and greatest in DevOps and ALM or VSTS and TFS, www.visualstudiogeeks.com will be a useful feed to follow. Hoping to see you on my new blog, follow here - http://feeds.feedburner.com/visualstudiogeeks/otas 

Cheers, Tarun  

          New book ALM and DevOps with Team Foundation Server 2015 Cookbook   

Originally posted on: http://staffofgeeks.net/archive/2016/02/09/alm-and-devops-with-team-foundation-server-2015-cookbook.aspx

Announcement - Book on implementing ALM and DevOps using Team Foundation Server 2015

I am delighted to announce that my first book on Team Foundation Server 2015 has now shipped!

I have been working with Team Foundation Server for over a decade, helping customers unlock the true potential of the product. I have been a Microsoft Most Valuable Professional in Visual Studio and Development Tools for over 5 years now, working closely with Microsoft Product Teams to help shape the product to be most relevant to its users. I have worked with a broad range of customers in the financial, trading, telecommunications and social sectors. While customers have varying levels of maturity in software application lifecycle management, there is a broad overlap in the problem areas hindering their ability to achieve continuous delivery of high quality software.

I have used my experience and learnings from these engagements to author over 80 hands-on DevOps- and ALM-focused labs for Scrum Teams, enabling software teams to champion the implementation of modern application lifecycle and DevOps tooling using Team Foundation Server 2015.

This book is a recipe-based guide that uses a problem-solution format to call out inefficiencies in the software development lifecycle and then guides you, step-by-step, on how you can use Team Foundation Server to your advantage in those areas. This book is aimed at software professionals including Developers, Testers, Architects, Configuration Analysts, and Release Managers who want to understand the capabilities of TFS to deliver better quality software faster.

Team Foundation Server 2015 Cookbook


The book has 340 pages divided into 8 chapters…

 

  1. Chapter 1: Team Project Setup - This chapter covers how to set up a Team Project, which is a logical container isolating all tools and artifacts associated with a software application together in a single namespace. Features such as Welcome pages, Dashboards, Team Rooms, and many more enable better collaboration within Teams, whereas the ability to rename Team Projects and scripting Team Project creation empowers you to better administer a Team Project. In this chapter, we’ll learn the different features of a Team Project and how to set up these features to leverage them to their full potential.

  2. Chapter 2: Setting Up and Managing Code Repositories - TFS is the only product to offer a centralized as well as distributed version control system. In this chapter, we’ll learn how to set up both TFVC and Git repositories in a single project and how to tackle technical debt by enforcing code reviews and code analysis into the development workflows.

  3. Chapter 3: Planning and Tracking Work - Requirements that are implemented but never used, or that are used just long enough to identify that they don’t satisfy the needs of the users, cause waste, re-work, and dissatisfaction. In this chapter, we’ll learn how to set up and customize multiple backlogs, Kanban, and the Sprint Task Board. We’ll also learn how to integrate with external planning tools using Service Hooks, and how to improve the feedback loop by leveraging the feedback features in TFS.

  4. Chapter 4: Building Your Application - This chapter introduces the new build system (TFBuild), which is a cross platform, open, and extensible task-based execution system with a rich web interface that allows the authoring, queuing, and monitoring of builds. In this chapter, we’ll learn how to set up and use TFBuild for continuous integration. We’ll also learn how to integrate TFBuild with SonarQube and GitHub. We’ll also review features that help lay the foundations for continuous delivery of software.

  5. Chapter 5: Testing Your Application - Low quality software just isn’t acceptable. But you may ask “what is the right level of quality?” In this chapter, we’ll learn how to plan, track, and automate using the testing tools available in TFS. We’ll also learn how to leverage the new build system to integrate non-Microsoft testing frameworks, such as Selenium and NUnit, into the automation testing workflows.

  6. Chapter 6: Releasing Your Application - This chapter introduces the new web-based Release Manager in TFS, which uses the same agent and task infrastructure offered by TFBuild. In this chapter, we’ll learn how to set up, secure, and deploy to multiple environments using release pipelines. We’ll also learn how to track and report on releases delivered through the release pipeline. The techniques in this chapter enable you to set up your software for continuous delivery.

  7. Chapter 7: Managing Team Foundation Server - This chapter teaches you how to update, maintain, and optimize your TFS, enabling high availability for geo-distributed teams and reducing administration overheads.

  8. Chapter 8: Extending and Customizing Team Foundation Server - It is not uncommon for organizations to have different tools to manage different parts of the life cycle, for example, Jira for Agile project management, TeamCity for builds, Jenkins for release management, and ServiceNow for service management. In this chapter, we’ll learn about the TFS object model and the TFS REST APIs to programmatically access and integrate with such systems (see the sketch after this list), and we’ll also cover how to customize Team Projects by leveraging Process Template customization.
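
As a small taste of the REST APIs Chapter 8 covers, here is a minimal sketch of my own; the server URL and credentials are placeholders, and an on-premises TFS will need whichever authentication scheme (NTLM, basic) the server is configured for:

    # Minimal sketch: list team projects via the TFS 2015 REST API.
    # Assumes: pip install requests; URL and credentials below are placeholders.
    import requests

    url = ("http://tfsserver:8080/tfs/DefaultCollection"
           "/_apis/projects?api-version=1.0")
    response = requests.get(url, auth=("DOMAIN\\user", "password"))
    response.raise_for_status()

    for project in response.json()["value"]:
        print(project["name"])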

 

Call for Action…


 

Note of Thanks


I would like to take a moment to thank a few people who helped me in completing this project…

  • I am grateful to Packt Publishing for publishing this book.

  • This book is dedicated to my mother Mrs. Raj Rani Arora and my father Mr. Inder Jit Arora without whom I wouldn’t be what I am today. This book would never have been complete without the support of my lovely wife Anuradha Arora. I would also like to thank my family and friends for their encouragement throughout the process.

  • The Microsoft Product Team, in particular Brian Harry, Buck Hodges, Aaron Bjork, Chris Patterson, Gopi Chigakkagari, Ravi Shanker, Karen Ng, Charles Sterling and Will Smyth, has been extremely helpful in guiding the direction of this book.

  • I would also like to thank the ALM Champs and ALM Rangers for their technical input and review of the book, especially Josh Garverick, Utkarsh Shigihalli and Willy Peter Schaub.

 

About the Author


Tarun Arora is obsessed with high-quality working software, continuous delivery, and Agile practices. He has experience managing technical programs, implementing digital strategy, and delivering quality @ scale. Tarun has worked on various industry-leading programs for Fortune 500 companies in the financial and energy sectors.

Tarun is one of the many geeks working for Avanade in the United Kingdom. Avanade helps clients and their customers realize results in a digital world through business technology solutions, cloud, and managed services that combine insight, innovation, and expertise in Microsoft technologies. For the past 5 years, Tarun has been a Microsoft Most Valuable Professional in Visual Studio and Development Technologies. His core strengths are enterprise architecture, .NET, WPF, SQL, and PowerShell. He was awarded the MVP of the year award by Microsoft in 2014 for going over and above in supporting the product teams and the community with his contributions. He is also an ALM Ranger and has contributed to key guidance and tooling projects focused on Azure, Team Foundation Server, Visual Studio Team Services, and Visual Studio Extensibility. Tarun is an active open source community contributor, speaker, and blogger. Follow him on twitter at @arora_tarun and his blog at Visual Studio Geeks - Blog for the latest and greatest in technology trends and solutions on DevOps and ALM.

Tarun loves photography and travel. He is a very active traveler and has travelled to more than 21 countries in the last few months. Parts of this book have been written on his journeys across three continents. While some chapters were written on the beaches of Mauritius, others were written in transit, airport lounges, and taxis. Follow his adventures on his travel blog #OutOfOffice Traveller - Blog.

Please drop me a comment if you have any questions, need more information or have any feedback…


Namaste!

Tarun


          [Freelancer] Bitcoin Expert or Team Needed For 2-Way Pegs Development   
From Freelancer // I am going to develop 2-Way Pegs based on the RSK open source project. You should know blockchain, sidechains, Ethereum and 2-way pegs well. Don't bother applying if you don't know these areas. I really will be glad if you know them well, and you must also be an IT developer...
          Python Bytes: #32 8 ways to contribute to open source when you have no time   
Brian #1: Introducing Dash (https://medium.com/@plotlygraphs/introducing-dash-5ecf7191b503)

* UI library for analytical web applications

Michael #2: Keeping Python competitive (https://lwn.net/Articles/723949/)

* Article on LWN, interview with Victor Stinner
* He sees a need to improve Python performance in order to keep it competitive with other languages.
* Not as easy to optimize as other languages. For one thing, the C API blocks progress in this area.
* Python 3.7 is as fast as Python 2.7 on most benchmarks, but 2.7 was released in 2010. Users are now comparing Python performance to that of Rust or Go, which had only been recently announced in 2010.
* In his opinion, the Python core developers need to find a way to speed Python up by a factor of two in order for it to continue to be successful.
* JITs may be part of the answer, notably Pyjion by Dino Viehland and Brett Cannon.
* An attendee suggested Cython, which does AoT compilation, but its types are not Pythonic. He suggested that it might be possible to use the new type hints and Cython to create something more Pythonic.

Brian #3: PyPI Quick and Dirty (https://hynek.me/articles/sharing-your-labor-of-love-pypi-quick-and-dirty/)

* A completely incomplete guide to packaging a Python module and sharing it with the world on PyPI. - Hynek Schlawack

Michael #4: Minimal examples of data structures and algorithms in Python (https://github.com/keon/algorithms)

* Simple algorithmic examples in Python, including linked lists, reversing linked lists, GCD, queues, binary search, depth first search, and many, many more

Brian #5: 8 ways to contribute to open source when you have no time (https://opensource.com/article/17/6/find-time-contribute)

Michael #6: NumPy receives first ever funding, thanks to Moore Foundation (https://www.numfocus.org/blog/numpy-receives-first-ever-funding-thanks-to-moore-foundation/)

* For the first time ever, NumPy, a core project for the Python scientific computing stack, has received grant funding.
* The proposal, "Improving NumPy for Better Data Science" (https://www.moore.org/grant-detail?grantId=GBMF5447), will receive $645,020 from the Moore Foundation over 2 years, with the funding going to UC Berkeley Institute for Data Science.
* The principal investigator is Dr. Nathaniel Smith (https://bids.berkeley.edu/people/nathaniel-smith).
* The NumPy project was started in 2006 by Travis Oliphant (https://www.numfocus.org/about/people/advisory-council/).
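
The algorithms repo in item #4 is worth a browse; as a flavour of what it contains, here is a minimal binary search sketch of my own in the same spirit (not code taken from the repo):

    # Iterative binary search: index of target in a sorted list, or -1.
    # Runs in O(log n) comparisons.
    def binary_search(items, target):
        lo, hi = 0, len(items) - 1
        while lo <= hi:
            mid = (lo + hi) // 2
            if items[mid] == target:
                return mid
            elif items[mid] < target:
                lo = mid + 1
            else:
                hi = mid - 1
        return -1

    assert binary_search([1, 3, 5, 7, 9], 7) == 3
    assert binary_search([1, 3, 5, 7, 9], 4) == -1
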
          Head of QA Engineering   
Role: Head of QA Engineering

Location: NYC

Tax Terms: Fulltime

Must Haves

* Must be hands-on with manual & automation testing AND must be able to set up, manage and develop the testing platform
* Develops test strategies on multiple platforms: Kiosks, Web, Salesforce (used for CRM/billing/customer experience via communities)
* Must know JIRA -- Must know how to read code in Python, Java, C++/C# to do testing on top
* Must know Selenium
* Other tools: Cucumber (highly preferred), Zephyr (not required), Xray (not required)
* Nice-to-haves: AWS (comfortable with software on AWS), Docker & Webservices

JOB DESCRIPTION

Head of QA Engineering

Our client is an innovative, fast-paced company searching for a driven Head of QA Engineering. Will be working in a fast-paced environment together with our Engineering team. You will guide the quality metrics and controls of our platform by developing and executing customer-focused test strategies - this won't be a simple case of executing test plans that have been predefined for you. You will be leading and driving our testing initiatives, and utilize various testing techniques to verify platform features and functionality. The successful candidate for this role is highly motivated and passionate about software quality, and has an insatiable desire to continuously improve complex systems.

What You Will Do:

* Plan, develop and execute manual/automated testing and performance testing of front-end and backend systems.
* Implement QA process and design test strategies, execution methods, and success measures of both functional testing and non-functional testing.
* Track, document deliverables and develop highly detailed QA plans regarding QA processes, approaches, tools, resources, ownership, scheduling, and criteria to measure success.
* Work with the broader Engineering team to create test plans for existing and new functionalities.
* Achieve quality assurance operational objectives by contributing information and analysis to strategic plans.
* Manage testing feedback, lead defect prioritization, and communicate regularly with discipline leads to ensure proper resolution in a timely fashion.
* Manage the JIRA bug tracking process, track issue status and resolution progress, and create quality metrics for the project team.
* Recommend best practices and approaches.

Who You Are:

* You are passionate about software testing and accelerating the pace of delivery.
* You stay up-to-date with the latest and greatest in technology, and you naturally seek new ways of improving your work.
* You truly care about how people interact with technology and you have a proven record of developing QA solutions in a customer-centric environment.
* You thrive in a fast-paced, collaborative environment where open communication is encouraged.
* You are an entrepreneurial team player and know how to multi-task.
* You are energetic, flexible, collaborative and proactive with flawless execution.
* A strong focus on business outcomes.
* Excellent judgment and creative problem solving skills.
* BS degree in Computer Science or Engineering field.

Tech Specs:
You have strong, hands-on experience with the following:

* Linux and Windows environments
* Programming languages (Python, Java, C++/C#) to automate manual testing or tasks
* Testing tools like Selenium (see the sketch after this list)
* Open source test tools and frameworks
* DevOps Testing workflows
* Amazon Web Services
* Docker / Micro Services
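
For illustration only, here is a minimal Selenium sketch in Python of the kind of check this role would automate; the URL and expected title are public placeholders of mine, and it assumes a chromedriver binary on the PATH:

    # Minimal Selenium check: load a page and verify its title.
    # Assumes: pip install selenium; chromedriver available on PATH.
    from selenium import webdriver

    driver = webdriver.Chrome()
    try:
        driver.get("https://example.com")
        assert "Example Domain" in driver.title, driver.title
    finally:
        driver.quit()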

Email: farhana.shaik@harveynashusa.com
          IT Manager   
US citizens and Green Card holders and those authorized to work in the US are encouraged to apply. We are unable to sponsor H1B candidates at this time.

IT Technical Manager - Product Technology

Location: Princeton, NJ

JOB DESCRIPTION

The platform integration and infrastructure group is seeking an IT manager who can manage a talented and experienced team of developers, architects, designers and DevOps engineers in the development and support of applications developed by the team.

The successful candidate will be an IT manager who has excellent resource management and mentoring skills and can take ownership of, and responsibility for, a diverse portfolio of integration components such as APIs, widgets & feeds, web sites and infrastructure framework software.

Responsibilities/Accountabilities:

* Manages and mentors a highly skilled, talented and experienced team of application developers
* Design, develop & support integration components compliant with industry standards
* Design, develop & support infrastructure software on which apps are developed by other teams
* Builds and executes rigorous and thorough load, stress and performance tests prior to deploying software
* Applies software engineering best practices and methods
* Experienced in agile methodologies and practices


Experience/Skills:

* 7+ years of design, development, and deployment of web sites & APIs using open source technologies such as Web Servers - nginx, apache, tomcat.
* Experience with programming languages - nodejs, Java, Linux OS, HTML, CSS, and Javascript frameworks such as reactjs, angularjs, jquery, etc.
* Experience in logging, monitoring & alerting of web apps
* Continuous integration and delivery of web apps
* Test Automation
* Managing performance metrics of the web apps
* 7+ years of resource management experience
* Excellent verbal & written skills to interact with peers, business partners & customers
          Technical Lead   
US citizens and Green Card Holders and those authorized to work in the US are encouraged to apply. We are unable to sponsor H1b candidates at this time”

Must Have

This is a Lead role requiring experience in tech/service delivery
4-5 years of experience managing a team is a must
Java/Linux, DB2, Application Support, MQ, Tibco
Helped team with DevOps or Continuous Integration
Background in hands on software development

Duties:



* Participate in the review of requirements and design of the new core system, along with documentation, in concert with business, technology and governance teams; create, revise and maintain the changes as required.

* Work across local and distributed development teams and plan system integration efforts of various developed components. Effective teamwork and communication skills are essential.

* Take full ownership and provide technical leadership of specific components, their function as well as their interaction with other components, with unit component testing - along with coordinating integration tests with participating components and their teams - and manage issue resolutions and follow through with fixes. Manage a team of onshore and offshore developers responsible for the components.

* Work with infrastructure teams to prepare and deploy the solution, including preparation of deployment instructions, and work with clients to validate and verify the application service function.

* Work with third party vendors and external connectivities and their gateways to prepare and test full-cycle data flows in test and production environments.

* Provide 3rd level support: the operational aspect of the role is limited to ensuring that our current services continue to function efficiently as they do today, and the solutions encompass that need. This entails supporting our internal and external clients during the application development process, plus L3 support for any application running in the UAT and Prod environments.



Skills/Qualifications:

Must Haves:

* Minimum of 8+ years of hands-on development and application tech-lead experience architecting, designing, developing, supporting, and owning applications in Core Java/Java EE - with preferable exposure to payment systems or trade processing systems. Able to independently undertake the major and critical coding tasks of a project.

* Solid experience with core server-side Java as well as GUI work, specifically GWT - other strong browser GUI technology experience can be considered. Provide solutions using design patterns, common techniques, and industry best practices that meet the typical challenges/requirements of a web-based UI application, including usability, performance, security, and compatibility.

* Experience designing and developing enterprise applications with J2EE/Java EE APIs, the core Spring framework and other Spring framework abstractions for Web, Data Access, Integration, Security, etc., JPA/Hibernate or any other ORM technology, and JMS and messaging systems such as IBM MQ and WebSphere.

* Good understanding of relational data models, SQL, and databases.

* Solid experience working on Linux-based platforms with strong scripting knowledge

* Preferable development exposure in HTML, CSS, Java/GWT, some JavaScript
* Experience in stakeholder management for technology delivery and production support, SDLC processes under technology governance, and technology/project risk management
* Understanding of enterprise security concepts, policy-based authorization, SSO, defensive and secure coding practices, PKI, etc.
* Knowledge of various software development methodologies (SDLC - Scrum) and techniques (continuous integration, automated unit testing, etc.).
* Experience using software development tools: Git or SVN, JIRA, Eclipse, Bamboo, Jenkins, Confluence, Maven

Pluses:

* Experience working with technology partners, offshore development and test teams
* Concurrent Java programming
* GWT GUI or JavaScript-based browser client technologies
* Experience with software development tools Eclipse, Git, Maven, Bamboo, JIRA, Confluence, Docker
* Unit test / test plan / test strategy / automated testing
* Strong scripting in Perl, Python or Shell
* Exposure to DB2, IBM MQ, WebSphere

Nice-To-Have:

* Multi-language skills in C++ or C#
* Knowledge of various useful open source packages
          Lead Android Apps Developer   

Our client is growing in Seattle and Palo Alto! We believe in security, choice, and openness. We’re working to make your Android device truly yours again and we're building the products and services to make it happen. Our goal has always been to create the best possible mobile experience. Together, through company and community, open source and innovation, we will build something unique.

In the Android Partner Apps team, we are looking for developers who are excited by the possibility of opening up deep integrations in our applications and framework for our partners to leverage. These apps include Launcher, Dialer, Camera, and Messaging, among many other entry points, and the goal is to build immersive experiences that bring technologies and features from other exciting companies straight into our native apps so that a user can get the full experience in one place. By working on our team, you will get to interface with teams from different companies, define APIs to add to our SDK, and ultimately build a better user experience by combining the advantages of our apps with our partners’ technologies.

As the leader of the Android Apps Team you will:

* Lead of a team of developers and be responsible for mentoring and giving direction to the team
* Be deeply involved in the hiring process for your team as well as help sister teams with interviewing candidates
* Work with product team to clarify product requirements and build out the UI flows and experiences with the design team
* Work with partners to define APIs, determine the split of responsibilities, and help drive the project to ship
* Lead and be responsible for the architectural design and APIs created in our SDK to enable partner integration experiences
* Implement complex new features and functionality both in apps as well as in the Android framework
* Get the chance to work on many different apps including Launcher, Dialer, Camera, Contacts, Gallery and many more

To be successful in the role, you should have:

* Experience in leading a team of engineers
* Extensive experience developing complex Android applications
* A strong understanding of the Android framework
* Exceptional OO design and development skills
* An expert understanding of the Android SDK
* Experience in costing and architecting large features based on requirements from Product
* Experience in prioritizing work items and weighing/understanding risks during different phases of a product cycle
* A track record of successfully shipping multiple projects
* The ability to work well with others in a fast-paced environment
* An open mind and excitement to learn new things

About Our Client:
Our client's OS is known for its revolutionary personalization features, intuitive interface, speed, improved battery life, and enhanced security. With a rapidly growing global user base and a vibrant community of developers, we’re connecting smartphone and tablet consumers to people, apps, and things they love.

Our client has backing from top tier, strategic investors, including Andreessen Horowitz, Benchmark, Redpoint Ventures, Premji Invest, Index Ventures, Twitter Ventures, Qualcomm Inc., and Tencent.


          GUI Craftsman - Hybrid Web Application Developer   

GUI Craftsman - Hybrid Web Application Developer

With a single glance, you have a habit of seducing users with your work. You are able to take a brush to canvas and paint visually beautiful experiences with a minimalist eye that is accessible to both the savvy and the masses. You compose with a generous brush stroke of reliability, responsiveness and security ultimately earning a user’s trust. With an immaculate attention to detail, you notice when a font is a point off. As a master of web development and web-UI, you also feel at home developing hybrid native/web apps. You excel when working in small, dynamic, and fast-paced teams and your self-driven attitude helps you thrive in unstructured environments. You are accomplished at what you do and long for the chance to work on a mix of bold innovations that push boundaries and change the way millions of people interact with technology. You live in a world where code is your paint, widgets your brushes, the screen your canvas and the world your gallery. You see the beauty in both perfect pixel alignment and in the performance of radix sort. You are an artist and a technologist. You are one of us.

About Us

We are a diverse group of entrepreneurially-minded engineers working on raising the bar for modern productivity and collaboration. The Innovation Studio we've built affords us the flexibility and excitement of an early-stage startup environment without the funding challenges or all-or-nothing risk that comes from a singular product focus. We’re a smart, social, and passionate team, looking for other fearless adventurers to join us on our journey.

Responsibilities

* Develop seductive and visually beautiful GUI applications for cross-platform environments utilizing a hybrid of both native and web app technologies
* Build solutions that leverage highly responsive, elegant, and user-friendly design
* Work with design team to create revolutionary experiences
* Instrument applications to improve responsiveness and understand usage behaviors
* Debug issues that arise around design, performance, and compatibility issues
* Help conduct usability tests
* Rapidly iterate new application capabilities based on results of performance and usability testing

Requirements

* BS in Computer Science or equivalent experience (e.g. 5+ years website development)
* Experience developing responsive and interactive desktop, mobile and web app GUI applications
* Iteratively prototyping design concepts and translating them into products
* Proficient in at least two modern programming languages and computer science fundamentals
* Strong foundation in CSS3, HTML5 and Javascript
* Experience with Photoshop or equivalent photo editing software
* Collaborate with design team to refine product experience
* Understand user centered design and possess user empathy
* Familiar with agile software development processes
* Self-driven and willing to take challenges head-on and achieve goals

Preferred Qualifications

* Masters in Computer Science or equivalent
* Experience developing graphical and charting applications
* Experience with mobile application development
* Experience with cross platform GUI toolkit, e.g. QT
* Experience preparing desktop application deployment packages
* Experience with distributed version control like Git, Bazaar, Mercurial
* Significant contributor to open source software
* Experience with AngularJS or similar frameworks (e.g. Ember, Backbone.js)
* Experience with web app technologies such as Sass, Grunt, jQuery, Foundation/Bootstrap


          DevOps Engineer   

Our client is growing! The time has come for your mobile device to truly be yours again and we're building the products and services to make it happen. We are looking for a detail-oriented Senior DevOps Engineer with cloud computing, deployment automation, network, server, and multi-region deployment experience.

As a DevOps Engineer you will:

* work hand-in-hand with product owners and a world-class engineering team to ensure the quality, functionality, performance, security and availability for our products
* be part of a team that designs and maintains our automated build, release, and deployments
* make recommendations for improvements to existing architecture
* participate in multi-region deployments at a rapid iteration
* Work in an Agile development environment

To be successful in the role, you should have:

* 4+ years of hands-on technical operations and coding experience
* a strong understanding of the aspects of service health including processor, memory, and network utilization (see the sketch below)
* experience with monitoring systems like Sensu or Nagios
* the ability to handle multiple critical tasks and self-manage
* the ability to troubleshoot and identify solutions to production scalability problems
* impeccable organizational skills
* experience working with Android and EC2 will be beneficial
* experience with Python, Java, Bash, or Ruby/Chef
* a Computer Science or related degree would be a bonus
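
By way of illustration, here is a minimal sketch, assuming the psutil library, of the kind of health metrics (processor, memory, network) this role keeps an eye on; the alert threshold is a made-up example:

    # Minimal service-health snapshot: CPU, memory, and network counters.
    # Assumes: pip install psutil. The 90% threshold is an arbitrary example.
    import psutil

    cpu = psutil.cpu_percent(interval=1)     # % CPU over a 1-second sample
    mem = psutil.virtual_memory().percent    # % physical memory in use
    net = psutil.net_io_counters()           # cumulative network I/O counters

    print("cpu=%.1f%% mem=%.1f%% sent=%dB recv=%dB"
          % (cpu, mem, net.bytes_sent, net.bytes_recv))

    if cpu > 90 or mem > 90:
        print("ALERT: host under pressure")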

About Our Client:

We believe in security, choice and openness - and are working to make your Android device truly yours again. Our goal has always been to create the best possible mobile experience. Together, through company and community, open source and innovation, we will build something unique.

Our client is a leading mobile operating system pure-play, with offices in Palo Alto and Seattle. Our client is known for performance, enhanced security and privacy, and revolutionary personalization features. With a rapidly growing global user base and the largest community of Android developers, our client is reimagining mobile computing.

Our client has backing from top tier, strategic investors, including Andreessen Horowitz, Benchmark, Redpoint Ventures, Premji Invest, Index Ventures, Twitter Ventures, Qualcomm Inc., and Tencent.


          Software Developer - Data/Interoperability (HL7)   

Software Developer - Data/Interoperability (HL7)

Who you are

* You are passionate about software development
* No matter how long you’ve been doing this, you strive to improve as a software craftsman
* You are involved with the software development community and are always looking to the latest technologies to improve quality, design, and delivery times.
* You excel with soft skills and communication which makes you very effective in a highly collaborative, team and paired programming environment
* Bonus for participation in open source projects
* You love agile principles - continuous improvement, incremental and continuous delivery, lean style techniques such as Kanban, Test Driven Development, etc

What you will do

As a core team member of our data platform team you will build integration tools and data warehouses that collect healthcare records from many sources and APIs. You will build data warehousing technologies to provide data extracts and reporting to our customers as well as operable business intelligence to our internal stakeholders. You will develop a strong health care industry domain knowledge and an understanding of the needs of cancer patients and be a leader on the team for continuous improvement, code quality, and software craftsmanship. You will get to work with a range of these tools and technologies:

* Ruby
* SQL Relational Databases
* Data integration, ETL, Natural Language Processing
* Enterprise Service Bus and Messaging Systems
* Data Warehousing and Reporting Tools
* HL7, FHIR, healthcare interoperability standards
* RESTful API and web services
* Rspec/Cucumber
* Resque/Redis
* Git

What you will bring to us

* A strong understanding of object-oriented design fundamentals and best practices
* Strong SQL relational database knowledge and the ability to write and optimize complex queries
* Experience doing test-driven development
* Your experience debugging performance issues, tuning, and scaling large applications
* Your bias for action, and experience making things happen in a fast-paced, dynamic environment
* Your excitement for the mission of Cancer care and a strong desire to impact an up-and-coming health care technology start-up
* Knowledge of HL7, FHIR, or other healthcare interoperability standards
* Bonus for experience with enterprise application integration and messaging systems experience



**I am looking for someone with experience writing application code that consumes and generates HL7 messages for data transfer.**



**ONLY SERIOUS INQUIRIES PLEASE**



Resumes without relevant experience will be ignored



Please send resumes directly to chris.demmel@harveynashusa.com


          Sr Front End Engineer   

Software Engineer - Front-End
Seattle, WA

Be an integral part of our team, leading the front-end development for our cloud-based data processing and drone sensor platform. Build JavaScript applications to analyze and distribute information in a fast, scalable, and easy-to-use manner to global users. The ideal candidate will possess a wide variety of skills in web application design, will have a fine-tuned instinct for good design patterns, and will be an expert in designing and deploying single page applications (SPAs).
Responsibilities

* Lead front-end user interface development for our web applications using AngularJS, Backbone and hand coded JavaScript.
* Work closely with the team to define and develop our data management, processing, and presentation architectures.

Qualifications

* Must have at least a bachelor's degree in Computer Science
* Be able to write hand coded javascript in your sleep
* Have experience with test-driven development (TDD) methods to create well-architected and well-tested code
* Be comfortable working in Linux-based development environments
* Must have experience with client-side JS application frameworks such as AngularJS and/or Backbone
* Possess a deep knowledge of HTML5 and CSS web standards
* Must have a portfolio of visually appealing and responsive user interface designs


Extra Points

* Development with JavaScript-based mapping applications (MapBox, Leaflet, OpenLayers, Google Maps)
* Experience with native mobile applications on Android and/or iOS
* Back-end development experience
* Open source software contributions
* Professional or personal Drone/UAV experience


          Comment on Where to get content for OpenSim by Kasumi Oanomochi   
I agree. Also, many people don't seem to realize that the internet STARTED as open source for the intellectuals of the world. It was never meant as a commercial environment. (Note: don't believe the US military story about the internet's origins either. I was there, and there were never any openly military individuals on it.) It was a place for smart people to talk to other smart people. There was little to no "trolling" either, as this was a practice brought in after HTTP and the "web" were introduced to the internet. Then commercial entities invaded, and the intelligent open source communities had to go to the "dark web" to get away from the sickness of commercialism. I wish the old days would come back, but this would mean that the connection to the internet would not just be given out to any troglodyte with a keyboard.
          Web Application Developer - Yahara Software - Madison, WI   
MongoDB or other NoSQL databases. We have an exciting opening for a full-stack, open source Web Application Developer (full-time) to join our innovative...
From Yahara Software - Mon, 15 May 2017 15:31:43 GMT - View all Madison, WI jobs
          BEST PHP Training in Noida   
PHP (a recursive acronym for PHP: Hypertext Preprocessor) is a widely-used open source general-purpose scripting language
          Free Download Manager 5.1.30.6509   
Free Download Manager Icon


Free Download Manager (FDM) is a powerful download accelerator and manager that allows you to download files and whole web sites from any remote server via HTTP, HTTPS, and FTP. Using Free Download Manager you can boost all your downloads up to 10 times, process media files of various popular formats, drag & drop URLs right from a web browser, as well as simultaneously download multiple files. In addition, it offers advanced features and allows you to adjust traffic usage, organize downloads, control file priorities for torrents, efficiently download large files and resume broken downloads.

Free Download Manager is compatible with the most popular browsers and integrates into Google Chrome, Mozilla Firefox, Microsoft Edge, Internet Explorer and Safari.

Free Download Manager Key Features:
  • Light-weight and easy to use
  • User-friendly interface with modern design
  • Fast, safe and efficient downloading
  • Video downloading from popular websites
  • Support HTTP/HTTPS/FTP/BitTorrent
  • Resume broken downloads
  • Enhanced audio/video files support
  • Smart file management and powerful scheduler
  • Adjusting traffic usage
  • Upload Manager


Download Free Download Manager

Download for Windows x32

Download for Windows x64

Download for Mac OS X
Download Free Download Manager 3.9.7

Download for Windows x32

Last Update: July 01, 2017

Current Version: 5.1.30.6509

License: Open Source

Languages:
English, Spanish, German, French, Portuguese, Romanian, Polish, Dutch, Swedish, Italian, Danish

Supported Operating Systems:
Windows 7 / 8 / 8.1 / 10 (32-Bit, 64-Bit)
Mac OS X 10.9 or later

Developer: Free Download Manager

Homepage: FreeDownloadManager.org

  • Bug fixes: HTTP downloads (login info), UI, moving downloads, context menu, etc.
  • Downgraded some third-party libraries
  • Allow moving downloads even if they're running
  • Added translations

          Principal SDE Lead - Microsoft - Redmond, WA   
Experience with open source platforms like node.js, Ruby on Rails, the JVM ecosystem, the Hadoop ecosystem, data platforms like Postgres, MongoDB and Cassandra...
From Microsoft - Thu, 29 Jun 2017 10:48:18 GMT - View all Redmond, WA jobs
          Senior Application Engineer - .NET, HTML, Visual Studio   
Tempe, If you are a Senior Application Engineer with experience, please read on! What You Will Be Doing - Troubleshoot issues in our technology applications and infrastructure across Linux, Hadoop, Java, MySQL, messaging, PHP, NGINX, node.js and other open source technologies. - Proactively perform detailed analysis to identify areas of improvement across the platforms - Use your experience to plan and e
          Senior Java Developer - Java, .NET, HTML, Visual Studio   
Gilbert, If you are a Senior Application Engineer with experience, please read on! What You Will Be Doing - Troubleshoot issues in our technology applications and infrastructure across Linux, Hadoop, Java, MySQL, messaging, PHP, NGINX, node.js and other open source technologies. - Proactively perform detailed analysis to identify areas of improvement across the platforms - Use your experience to plan and e
          Daming Elementary School Nutritious Lunch Site - User Management   
XOOPS is a dynamic, object-oriented, open source portal script written in PHP.
          zuniga wrote in the topic: What needs fixing in this script?   
Good afternoon. I have a feedback form script with attachment support. Six months ago I tested it on a free host and everything worked perfectly. Today I uploaded it again, but besides the form a lot of errors appeared; they can be seen in the screenshot below. I have uploaded the form to other hosts, where it displays fine, but no mail arrives from it; apparently the "send mail" function is disabled there. Tell me, can the form be fixed so that all the errors disappear? This is the only free host where the "send mail" function works, and I don't want to pay for hosting just for one form. Here is a screenshot of the errors:

user posted image

And here is the code of the two PHP files:

The main file, phMailer.php

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">

<head>
<meta http-equiv="Content-Language" content="ru" />
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<title>Написать мне письмо</title>
<style type="text/css">
.style1 {
text-align: center;
}
</style>
</head>

<body>

<table style="width: 800px" cellspacing="1" align="center">
<tr>
<td class="style1">Написать мне письмо<br />
<br />
<br />
<?php
/*
//================================================================================
* phphq.Net Custom PHP Scripts *
//================================================================================
:- Script Name: phMailer
:- Version: 1.5.1
:- Release Date: Jan 27th 2004
:- Last Update: Jan 25 2010
:- Author: Scott Lucht <scott@phphq.net> http://www.phphq.net
:- Copyright© 2010 All Rights Reserved
:-
:- This script is free software; you can redistribute it and/or modify
:- it under the terms of the GNU General Public License as published by
:- the Free Software Foundation; either version 2 of the License, or
:- (at your option) any later version.
:-
:- This script is distributed in the hope that it will be useful,
:- but WITHOUT ANY WARRANTY; without even the implied warranty of
:- MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
:- GNU General Public License for more details.
:-
http://www.gnu.org/licenses/gpl.txt
:-
//================================================================================
* Description
//================================================================================
:- phMailer is a very simple PHP mail script that supports attachments. This is very helpful if you want your
:- visitors to be able to contact you without them knowing your real email address. One great feature of this
:- script is the ability to allow users to attach multiple files when sending an email directly from your site. Of
:- course, you can disable this feature if you wish. Any file type is accepted as long as they are included in your
:- file extension list. Another popular use for a PHP email form is protection against spam bots. Spam is a major
:- downside of email, and placing your email publicly on your website is asking for spam. Spam bots can take your
:- email address right off your site and add it to thousands of spam databases, resulting in a never ending supply
:- of spam for you. I coded this script, because I couldn't find a simple mail script that would allow my visitors to
:- send me attachments while keeping my email hidden from spam bots.
//================================================================================
* Setup
//================================================================================
:- To setup this script, simply upload this file to your website. Then edit the variables found herein to adjust
:- how the form works.
//================================================================================
* Change log
//================================================================================
:- Version 1.0
:- 1) Initial Release
:- Version 1.1
:- 1) Minor bug fixes / html improvement
:- Version 1.2
:- 1) Added CSS styling
:- 2) Cleaned html and improved form style
:- 3) Removed html embedded directly in php tags
:- 4) Improved security checks to prevent forging email headers
:- Version 1.3
:- 1) Cleaned up html and CSS styles
:- 2) Added support to attach multiple files
:- 3) Minor bug fixes
:- Version 1.4
:- 1) Re-write of many core functions to improve attachment handling
:- 2) Added feature that allows users to select an email address from a drop down
:- 3) Minor bug fixes
:- Version 1.5
:- 1) Added multiple new security checks to prevent email header forging.
:- 2) Cleaned up script and reduced PHP needed to complete tasks.
:- 3) Minor bug fixes
:- Version 1.5.1
:- 1) Cleaned up script and reduced PHP needed to complete tasks.
:- 2) Added text/html as email type to allow users to use line breaks when sending a message. Messages
:- now display correctly in newer email clients such as Gmail.
:- 3) Removed unnecessary email headers and improved standardization
:- 4) Made sure script is completely compatible with PHP 5.3.x
//================================================================================
* Frequently Asked Questions
//================================================================================
:- Q1: I never receive any mail, but people say they have emailed me through the form.
:- 1) Try the mailtest.php file that came with this script. If that fails, then mail() is probably not setup right.
:- 2) Double check to make sure your email address is correct.
:- 3) Try using the form with $allowattach set to 0. It could be your mail server rejecting the mail
:- because of attachments.
:- 4) If you are on windows, make sure your SMTP is set to your mail server. If you are on Linux, make sure
:- your sendmail path if correct. Again, ask your host if you are unsure about this.

:- Q2: I never receive any attachments.
:- 1) Maybe your server has some security against uploading files or sending attachments through mail,
:- check with your host on this issue. This script does send attachments, it's been tested many
:- times on many different platforms and versions of PHP with safe mode on and off.
:- 2) Maybe the files people are submitting are too big. Check php.ini for the post_max_size,
:- upload_max_filesize, file_uploads, max_execution_time you may have to check with your host on this.
:-
:- Q3: The page takes long to load and then gives me a page cannot be displayed or a blank page.
:- 1) This is usually due to a low value in php.ini for "max_execution_time".
:- 2) A newer ini setting "max_file_uploads" in php 5.2.12 was added which may be limiting the number
:- of simultaneous uploads.
:- 3) Your "upload_max_filesize" and "post_max_size" in php.ini might be set to low.
:-
:- Q4: How do I edit the colors of the form?
:- 1) You will need to edit the CSS near the bottom of the script to change the looks and colors of the form.
:- Check http://www.w3schools.com/css/default.asp for more information on CSS.
:-
:- Q5: Can I add more fields for the users to enter information in?
:- 1) That's the beauty of PHP! It's open source, you can edit it all you want, change whatever you don't like.
:- Just please leave in my copyright. So many times I see my script without it and it makes me sad.
:-
:- Q6: Dude! Can you add more fields for me? I don't know PHP!
:- 1) Maybe, but I do usually charge a fee depending on what you want done. Don't freak out! It's usually
:- a very small one. I can't do everything for free..
:-
:- Q7: Can I remove your copyright link?
:- 1) I can't physically stop you. However, I really appreciate it when people leave it intact.
:- Some people donate to take it off.
:-
:- Q8: You never respond to my emails or to my questions in your forums!
:- 1) I'm a very busy guy. I'm out of town a lot, and at any given time I have several projects going on.
:- I get a lot of emails about this script, not to mention my other ones.
:- 2) I only understand English. If your English is very bad please write in your native language and then
:- translate it to English using <http://babelfish.altavista.com/babelfish/tr>.
:- 3) If you are going to contact me, describe the issue you are having as completely as possible.
:- "dude me form don't work see it at blah.com what's wrong??!?!" will get no response, ever. Write
:- in detail what the problem is. Spend a minute on it, and maybe I'll take some of my time to reply.
/*
//================================================================================
* ! ATTENTION !
//================================================================================
:- Please read the above FAQ before emailing me.
*/

// This will show in the browsers title bar and at the top of the form.

$websitename="Отправка письма";

// Allowed file types. Please remember to keep the format of this array, add the file extensions you want
// WITHOUT the dot. Please also be aware that certain file types (such as exe) may contain malware.

$allowtypes=array("zip", "rar", "txt", "doc", "jpg", "png", "gif", "odt", "xml");

// What's your email address? Separate email addresses with commas for multiple email addresses.
$myemail="myemail@yandex.ru";

// What priority should the script send the mail? 1 (Highest), 2 (High), 3 (Normal), 4 (Low), 5 (Lowest).
$priority="3";

// Should we allow visitors to attach files? How Many? 0 = Do not allow attachments,
// 1 = allow only 1 file to be attached, 2 = allow two files etc.

$allowattach="1";

// Maximum file size for attachments in KB NOT Bytes for simplicity. MAKE SURE your php.ini can handle it,
// post_max_size, upload_max_filesize, file_uploads, max_execution_time!
// 2048kb = 2MB, 1024kb = 1MB, 512kb = 1/2MB etc..

$max_file_size="1024";

// Maximum file size for all attachments combined in KB. MAKE SURE your php.ini can handle it,
// post_max_size, upload_max_filesize, file_uploads, max_execution_time!
// 2048kb = 2MB, 1024kb = 1MB, 512kb = 1/2MB etc..

$max_file_total="2048";

// Value for the Submit Button
$submitvalue=" Отправить ";

// Value for the Reset Button
$resetvalue=" Очистить ";

// Default subject? This will be sent if the user does not type in a subject
$defaultsubject="No Subject";

// Because many requested it, this feature will add a drop down box for the user to select a array of
// subjects that you specify below.
// True = Use this feature, False = do not use this feature

$use_subject_drop=false;

// This is an array of the email subjects the user can pick from. Make sure you keep the format of
// this array or you will get errors.
// Look at <http://novahq.net/forum/showthread.php?t=38718> for examples on how to use this feature.

$subjects=array("Department 1", "Department 2", "Department 3");

// This is an array of the email addresses for the array above. There must be an email FOR EACH
// array value specified above. You can have only 1 department if you want.
// YOU MUST HAVE THE SAME AMOUNT OF $subjects and $emails or this WILL NOT work correctly!
// The emails also must be in order for what you specify above!
// Separate email addresses by a comma to send an email to multiple addresses.

$emails=array("dept_1@domain.com", "dept_2@domain.com", "dept_3@domain.com");

// This is the message that is sent after the email has been sent. You can use html here.
// If you want to redirect users to another page on your website use this:
// <script type=\"text/javascript\">window.location=\"http://www.YOUR_URL.com/page.html\";</script>

$thanksmessage="Ваше письмо отправлено! В ближайшее время я отвечу.";

/*
//================================================================================
* ! ATTENTION !
//================================================================================
: Don't edit below this line.
*/

// Function to get the extension of the uploaded file.

function get_ext($key) {
$key=strtolower(substr(strrchr($key, "."), 1));
$key=str_replace("jpeg", "jpg", $key);
return $key;
}

// Function used to attach files to the message
function phattach($file, $name, $boundary) {

$fp=fopen($file, "r");
$str=fread($fp, filesize($file));
$str=chunk_split(base64_encode($str));
$message="--".$boundary."\n";
$message.="Content-Type: application/octet-stream; name=\"".$name."\"\n";
$message.="Content-disposition: attachment; filename=\"".$name."\"\n";
$message.="Content-Transfer-Encoding: base64\n";
$message.="\n";
$message.="$str\n";
$message.="\n";

return $message;
}

//Little bit of security from people forging headers. People are mean sometimes :(
function clean_msg($key) {
$key=str_replace("\r", "", $key);
$key=str_replace("\n", "", $key);
$find=array(
"/bcc\:/i",
"/Content\-Type\:/i",
"/Mime\-Type\:/i",
"/cc\:/i",
"/to\:/i"
);
$key=preg_replace($find, "", $key);
return $key;
}

// Initialize some variables, including everything the template reads,
// so no undefined-variable notice is raised on the first page load.
$error="";
$display_message="";
$sent_mail=false;

// When the form is submitted
If(isset($_POST['submit'])) {
extract($_POST, EXTR_SKIP);

// Check the form for errors
If(trim($yourname)=="") {
$error.="You did not enter your name!<br />";
}

If(trim($youremail)=="") {
$error.="You did not enter your email!<br />";
} Elseif(!preg_match("/^\w+([\.-]?\w+)*@\w+([\.-]?\w+)*(\.\w{2,})+$/", $youremail)) {
$error.="Invalid email address.<br />";
}

If(trim($emailsubject)=="") {
$emailsubject=$defaultsubject;
}

If(trim($yourmessage)=="") {
$error.="You did not enter a message!<br />";
}

// Verify Attachment info
If($allowattach > 0) {

// Get the total size of all uploaded files
If((array_sum($_FILES['attachment']['size'])) > ($max_file_total*1024)) {

$error.="The max size allowed for all your files is ".$max_file_total."kb<br />";

} Else {

//Loop through each of the files
For($i=0; $i <= $allowattach-1; $i++) {

If($_FILES['attachment']['name'][$i]) {

//Check if the file type uploaded is a valid file type.
If(!in_array(get_ext($_FILES['attachment']['name'][$i]), $allowtypes)) {

$error.= "Invalid file type for your file: ".$_FILES['attachment']['name'][$i]."<br />";

//Check the size of each file
} Elseif(($_FILES['attachment']['size'][$i]) > ($max_file_size*1024)) {

$error.= "Your file: ".$_FILES['attachment']['name'][$i]." is too big.<br />";

} // If in_array

} // If Files

} // For

} // Else array_sum($_FILES['attachment']['size'])

} // If Allowattach

If($error) {

$display_message=$error;

} Else {

If($use_subject_drop AND is_array($subjects) AND is_array($emails)) {
$subject_count=count($subjects);
$email_count=count($emails);

If($subject_count==$email_count) {

$myemail=$emails[$emailsubject];
$emailsubject=$subjects[$emailsubject];

} // If $subject_count

} // If $use_subject_drop

$boundary=md5(uniqid(time()));

//Headers
$headers="Return-Path: <".clean_msg($youremail).">\n";
$headers.="From: ".clean_msg($yourname)." <".clean_msg($youremail).">\n";
$headers.="X-Mailer: PHP/".phpversion()."\n";
$headers.="X-Sender: ".$_SERVER['REMOTE_ADDR']."\n";
$headers.="X-Priority: ".$priority."\n";
$headers.="MIME-Version: 1.0\n";
$headers.="Content-Type: multipart/mixed; boundary=\"".$boundary."\"\n";
$headers.="This is a multi-part message in MIME format.\n";

//Message
$message = "--".$boundary."\n";
$message.="Content-Type: text/html; charset=\"iso-8859-1\"\n";
$message.="Content-Transfer-Encoding: quoted-printable\n";
$message.="\n";
$message.=clean_msg(nl2br(strip_tags($yourmessage)));
$message.="\n";

//Add attachments to message
If($allowattach > 0) {

For($i=0; $i <= $allowattach-1; $i++) {

If($_FILES['attachment']['tmp_name'][$i]) {

$message.=phattach($_FILES['attachment']['tmp_name'][$i], $_FILES['attachment']['name'][$i], $boundary);

} //If $_FILES['attachment']['name'][$i]

} //For

} // If

// End the message

$message.="--".$boundary."--\n";

// Send the completed message
If(!mail($myemail, clean_msg($emailsubject), $message, $headers)) {

Exit("An error has occured, please report this to the website administrator.\n");

} Else {

$sent_mail=true;

}

}
// Else

} // $_POST

/*
//================================================================================
* Start the form layout
//================================================================================
:- Use the html below to customize the form.
*/

?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
<head>
<meta http-equiv="Content-Language" content="en-us" />
<meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1" />
<title><?php echo $websitename; ?> - Powered By phMailer</title>

<style type="text/css">
body{
background-color:#FFFFFF;
font-family: Verdana, Arial, sans-serif;
font-size: 12pt;
color: #000000;
}

.error_message{
font-family: Verdana, Arial, sans-serif;
font-size: 11pt;
color: #FF0000;
}

.thanks_message{
font-family: Verdana, Arial, sans-serif;
font-size: 11pt;
color: #000000;
}

a:link{
text-decoration:none;
color: #000000;
}
a:visited{
text-decoration:none;
color: #000000;
}
a:hover{
text-decoration:none;
color: #000000;
}

.table {
border-collapse:collapse;
border:1px solid #000000;
width:500px;
}

.table_header{
border:1px solid #070707;
background-color:#C03738;
font-family: Verdana, Arial, sans-serif;
font-size: 11pt;
font-weight:bold;
color: #FFFFFF;
text-align:center;
padding:2px;
}

.attach_info{
border:1px solid #070707;
background-color:#EBEBEB;
font-family: Verdana, Arial, sans-serif;
font-size: 8pt;
color: #000000;
padding:4px;
}


.table_body{
border:1px solid #070707;
background-color:#EBEBEB;
font-family: Verdana, Arial, sans-serif;
font-size: 10pt;
color: #000000;
padding:2px;
}

.table_footer{
border:1px solid #070707;
background-color:#C03738;
text-align:center;
padding:2px;
}

input,select,textarea {
font-family: Verdana, Arial, sans-serif;
font-size: 10pt;
color: #000000;
background-color:#AFAEAE;
border:1px solid #000000;
}

.copyright {
border:0px;
font-family: Verdana, Arial, sans-serif;
font-size: 9pt;
color: #000000;
text-align:right;
}

form{
padding:0px;
margin:0px;
}
</style>

<script type="text/javascript">
var error="";
e_regex = /^\w+([\.-]?\w+)*@\w+([\.-]?\w+)*(\.\w{2,})+$/;

function Checkit(theform) {
if(theform.yourname.value=="") {
error+="Заполните поле Твое имя\n";
}

if(theform.youremail.value=="") {
error+="Заполните поле Твой email\n";
} else if(!e_regex.test(theform.youremail.value)) {
error+="Invalid email address\n";
}

if(theform.yourmessage.value=="") {
error+="Заполните поле Сообщение\n";
}

if(error) {
alert('**Ошибки:**\n\n' + error);
error="";
return false;
} else {
return true;
}
}

</script>

</head>
<body>
<?php If($display_message) {?>

<div align="center" class="error_message"><b><?php echo $display_message;?></b></div>
<br />

<?php }?>

<?php If($sent_mail!=true) {?>

<form method="post" action="<?php echo htmlspecialchars($_SERVER['PHP_SELF']);?>" enctype="multipart/form-data" name="phmailer" onsubmit="return Checkit(this);">
<table align="center" class="table">
<tr>
          Content Specialist - OpenText ECM - Accenture - Canada   
Tomcat, WebSphere, Weblogic, Apache Http, Spring tcServer, Solr, open source packages. Accenture is a leading global professional services company, providing a...
From Accenture - Tue, 27 Jun 2017 02:50:27 GMT - View all Canada jobs
          Digital Technology Developer Sr Manager - Accenture - Canada   
Tomcat, WebSphere, Weblogic, Apache Http, Spring tcServer, Solr, open source packages Experience with project automation technology:....
From Accenture - Wed, 12 Apr 2017 10:04:31 GMT - View all Canada jobs
          WCM Senior Developer - Accenture - Canada   
Tomcat, WebSphere, Weblogic, Apache Http, Spring tcServer, Solr, open source packages. Experience working with relevant WCM or eCommerce packaged solutions such...
From Accenture - Fri, 07 Apr 2017 03:45:10 GMT - View all Canada jobs
          CopyQ 3.0.3   
CopyQ is a clipboard manager with searchable and editable history plus support for image formats, command line control and more. [License: Open Source | Requires: Win 10 / 8 / 7 / Vista / XP | Size: 10.6 MB ]
          Security: TIOCSTI, OutlawCountry, Jeep, and Older News Catchup   
  • On the Insecurity of TIOCSTI
  • OutlawCountry: CIA’s Hacking Tool For Linux Computers Revealed
  • Feds: Mexican motorcycle club used stolen key data to fuel massive Jeep heist

     

    Once inside, the thieves connected a "handheld vehicle program computer" into the Jeep's diagnostic port. Then, using the second key, the microchip on the duplicate key would be programmed, or "paired." With that complete, the alarm would cease, and the rear lights would stop flashing. Finally, the thieves would drive the Jeep into Mexico.

  • [Old] How Big Fuzzing helps find holes in open source projects

    Is “fuzzing” software to find security vulnerabilities using huge robot clusters an idea whose time has come?

    The latest numbers to emerge from Google’s OSS-Fuzz, a beta launched last December to automatically search for flaws in open source software, look encouraging.

  • [Old] Google's Fuzz Tester IDs Hundreds of Potential Open Source Security Flaws [Ed: This site is connected to Microsoft and cites Black Duck to make FOSS look bad.]

    Also, Black Duck Software Inc. recently revealed the results of security audits it undertook that show "widespread weakness in addressing open source security vulnerability risks."

  • [Old] Buy vs. build to reduce insider threats [Ed: False dichotomy. You do not ever BUY proprietary software, you license or rent. And FOSS is commercial. This site is connected to Microsoft.]

    There is no arguing that cybersecurity is a huge concern for the public, industry and government alike. The general consensus is that we need to be doing more, but we also need to be doing something different.

    The federal government and its agencies spend a lot of money on cybersecurity. The 2017 federal fiscal budget for information security was $19 billion. In recent years, a single cybersecurity contract has cost up to $1 billion. These contracts are largely awarded to federal contractors so that they can build custom solutions for agencies. And there is no lack of research pointing to the fact that the government pays contractors far more than it pays its own employees. All of this spending on cybersecurity could actually be weakening the government’s security posture.

    [...]

    Commercially supported open source has one other feature the contractor-implemented open source doesn't -- economies of scale. Because the majority of financial support for commercially supported software comes from the private sector and not the government, cost savings over the lifetime of a supported feature are massive. Though the government may be the first to request or introduce a software feature, when it's commercially supported those private sector companies co-fund the software O&M. Whenever a major bank adopts the same software the government uses, they both benefit from those advances. But government is one funding contributor among many, saving taxpayers a great deal of money.

  • [Old] #Infosec17 Dangers and Dependencies of Open Source Modules Detailed

    A common attack relies on spelling mistakes, as a look-alike name can allow you to take over a legitimate module identity. "The developers are here to develop and don't always consider security," he said.


          (USA-WA-Seattle) Software Development Engineer – In-Memory Distributed Systems   
Our software developers build the next generation technologies that change how millions of AWS customers connect and interact with the AWS services ecosystem. We use ideas from every facet of computer science including distributed computing, large-scale design, big and real-time data processing, data storage, service oriented architecture, networking, machine learning, and artificial intelligence.

We are looking for highly-motivated and passionate engineers to build our next generation high performance in-memory distributed data storage platform to solve real-time query, transaction and analytics processing for large scale data applications. If you have ever pondered the CAP theorem, consistent hashing, multi-master replication, merkle trees, leader election or the Paxos algorithm, gossip protocols, or tiered storage, this is an opportunity to get your hands dirty with a real-world solution implementing these distributed system concepts. Come work with the folks who are not only building a highly-available and scalable in-memory distributed service but also influencing the direction of NoSQL systems throughout the industry (read our acclaimed Dynamo paper here: http://www.allthingsdistributed.com/files/amazon-dynamo-sosp2007.pdf).

As an engineer in our in-memory computing platform team, you will build our next-generation in-memory NoSQL database platform that allows developers to build highly available, scalable and high performance applications. We are working to bring some of the assets of RDBMS systems, such as SQL and transactions, to the rapidly growing world of NoSQL database systems. The software services have unprecedented scale and availability requirements. You will lead the software development of a large-scale distributed in-memory storage platform in Java, C/C++ and other languages, using open source technologies like Redis and Memcached as well as Amazon proprietary technologies. This includes software applications dealing with HTTP/REST services, asynchronous messaging, event-based technologies, real-time failure detection, horizontal and vertical scaling, management and monitoring plane workflows, auto-remediation, fault tolerance, backup and restore technologies, and disaster recovery and prevention. As a member of the In-Memory Storage Platform team, you will also get to work with exceptional team members and be directly involved in growing and mentoring junior engineers on the team.

We are building a high performance, low-latency database where caching and data storage are managed by a single system to support real-time applications like IoT or mobile apps. We are extending our service from being just an in-memory data store cache to also providing durable data storage without compromising latency. In addition, we are building a new highly scalable and available management plane using a micro-services architecture, plus a real-time failure detection and auto-remediation system that can detect node failures in our large distributed clusters and initiate remediation of failed nodes within seconds.

Our charter is ElastiCache. ElastiCache is an AWS service that enables users to deploy, manage and massively scale in-memory distributed data stores. Customers include many of the world's fastest growing start-ups, using the service to build a low latency, high throughput data layer and improve the performance of applications using caching.
Amazon ElastiCache helps developers turbo-charge their application performance and simplifies management of Memcached and Redis data stores in the cloud. We heavily use open-source software systems in providing a world-class experience to our customers.

To apply for this role, we are looking for folks with solid analytical, design and problem diagnosis skills; expertise with systems programming, database internals, high-performance applications, distributed systems or service design is a plus. We need our engineers to be versatile, display leadership qualities and be enthusiastic to tackle new problems across the full stack as we continue to push technology forward. With your technical expertise you will manage individual project priorities, deadlines and deliverables. You will design, develop, test, deploy, maintain, and enhance software solutions.

+ Expert knowledge of one of the following programming languages: Java, C and C++
+ 7+ years of hands on experience in software development, including design, implementation, debugging, and support, building scalable system software and/or Services
+ Deep understanding of distributed systems and web services technology
+ Strong at applying data structures, algorithms, and object oriented design, to solve challenging problems
+ Experience working with REST and RPC service patterns and other client/server interaction models
+ Track record of building and delivering mission critical, 24x7 production software systems
+ Bachelor's degree in Computer Science or equivalent
+ Experience in taking a lead role developing complex software systems that have successfully been delivered to customers
+ Knowledge of professional software engineering practices & best practices for full software development life cycle, including coding standards, code reviews, source control management, build processes, testing and operations
+ Demonstrated ability to mentor other software developers in all aspects of their engineering skillsets
+ Experience in communicating with users, other technical teams, and senior management to collect requirements, describe software product features, product strategy and influence outcomes in technical decision-making
+ Experience working with in-memory caching and database technologies, including Memcached and Redis
+ Solid understanding of performance and efficiency with a strong customer focus
+ Master's degree in Computer Science or equivalent

AMZR Req ID: 553974 External Company URL: www.amazon.com
          (USA-WA-Seattle) Amazon Aurora Distributed Storage - AWS Senior Software Development Engineer   
Are you interested in building hyper-scale database services in the cloud? Do you want to revolutionize the way people manage vast volumes of data in the cloud? Do you want to have direct and immediate impact on hundreds of thousands of users who use AWS database services?

Amazon Aurora is a MySQL-compatible, relational database engine that combines the speed and availability of high-end commercial databases with the simplicity and cost-effectiveness of open source databases. It provides up to five times better performance than MySQL at a price point one tenth that of a commercial database while delivering similar performance and availability.

The Amazon Aurora Storage team is looking for experienced technical experts in large scale storage system technologies to build our distributed storage that runs across thousands of servers in multiple datacenters worldwide. These are core systems development positions where you will own the design and development of significant system software components critical to our industry leading storage services architected for the cloud.

This is a hands on position where you will be asked to do everything from building rock-solid components to mentoring other engineers. You need to not only be a top software developer with a good track record of delivering, but also excel in communication, leadership and customer focus. This is a unique and rare opportunity to get in on the ground floor within a fast growing business and help shape the technology, product and the business. A successful candidate will bring deep technical and software expertise and the ability to work within a fast moving, startup environment in a large company to deliver solid code that has a broad business impact. Come, join us in reinventing database systems for the cloud!

+ 10+ years' overall development experience and 7+ years' enterprise software experience.
+ Bachelor's Degree in Computer Science or equivalent - Masters in CS preferred.
+ Advanced coding skills in C or C++, preferably on a Linux/Unix platform.
+ Multi-threaded programming.
+ Knowledge of data structures, algorithms, operating systems, and/or distributed systems.
+ Storage Technology and optimization (intimate knowledge of storage hardware a plus).
+ OS internals.
+ Distributed systems and messaging.
+ Low level performance and tuning.
+ Excellent leadership, verbal and written communication skills.
+ Ability to work well with people and be both highly motivated and motivating.

*LI-JF1 aws-sdesdm-na AMZR Req ID: 545948 External Company URL: www.amazon.com
          (USA-WA) Senior Engineer, Game Ecosystem   
**General Description**

Working as a Senior Engineer, your main responsibility is to facilitate development of games and other heavily graphical applications on Samsung's Android-based mobile products. The Ecosystem team exists to ensure that Samsung's software and hardware systems are easily accessible to developers and offer a world-class experience in terms of performance and ease of development.

The role of Ecosystem Engineer within the Ecosystem group includes taking senior technical responsibility for the tasks allocated and providing a competent level of technical authority in one or more technical skill areas within the group. An important part of the job will be building relationships with external parties such as game developers and related companies. You must be comfortable with presenting technical data and advice to developers and will be expected to collect and prioritise their issues and concerns so that Samsung can provide advice and solutions.

You will be involved in incubating new projects, which may involve feasibility, design and prototyping work across the complete range of mobile software platforms and applications used by Samsung, its suppliers and customers. You will be responsible for performing all, or part, of the software development cycle (from Analysis, Specification, Designing, Documentation, Implementation, Verification and Commercialization Support) for a given area of software development (Android). A particular focus will be on optimisation and improvement of the software stack to support high-performance graphics applications. As a Senior Software Engineer you will be expected to develop and maintain a wide understanding of all Linux based platforms, including the graphics components that can be used in mobile devices, and to provide high level technical input to architectural designs within the Core Graphics group and other groups within Samsung.

The main tasks you are expected to perform include, but are not limited to:

* Engage directly with third party developers, Samsung internal customers, suppliers and key Open Source Software projects to facilitate effective development of games and other demanding applications on Samsung mobile products.
* Develop, implement or improve areas of functionality and technology according to the requirements of Samsung's Core Graphics projects for mobile devices.
* Provide advice to line and project managers regarding industry trends, also input for project planning and budgeting.
* Carry out with minimal technical supervision detailed interpretation of architectural documentation, project requirements and technical marketing information.
* Have technical responsibility for one or more significant sections of the assigned project and carry out, with minimal supervision, the assigned work.
* Support specific areas of functionality in developments both at SRUK and other locations in Samsung, or in collaboration with external partners.
* Keep abreast of developments with all Samsung Mobile SW platforms, understand their architectures and how to design and develop new features and applications for them.
* Help to introduce new and innovative technology to Samsung's products.
* Assist the other team members with their work, technically supporting some junior software engineers within the team.
* Produce high quality deliverables (code and written reports) to SRUK and Samsung Corporate standards where required.
* Provide written reports, following the attendance of meetings and resulting from other activities undertaken as appropriate.
* Work as a member of a team, encouraging team building and motivation, and cultivating effective team relations.
* Support the Team Manager in identifying training and development needs.
* Support the Team Manager in continuous development of methods and processes.

All work is to be of a professional standard, paying due regard to safety, efficiency, cost effectiveness, time scales and the needs of the Company.

**Necessary Skills / Attributes**

* Engage directly with third party developers, Samsung internal customers, suppliers and key Open Source Software projects to facilitate effective development of games and other demanding applications on Samsung mobile products.
* Develop, implement or improve areas of functionality and technology according to the requirements of Samsung's Core Graphics projects for mobile devices.
* Provide advice to line and project managers regarding industry trends, also input for project planning and people up to Director level.
* The desire and ability to work within a team structure and to be able to mentor junior engineers.
* A high degree of self-motivation and the ability to work alone, managing own work and setting sensible priorities according to requirements.
* Good analytical and logical thinking capability.
* Ability to learn and implement SRUK and Corporate business philosophies.

**Company Information**

Samsung Global: Samsung Electronics Co., Ltd. is the global leader in consumer electronics and the core components that go into them. Through relentless innovation and discovery, we are transforming the worlds of televisions, smartphones, personal computers, printers, cameras, home appliances, medical devices, semiconductors and LED solutions. We employ 206,000 people across 72 countries with annual sales exceeding US $143.1 billion. Our goal is opening new possibilities for people everywhere.

Samsung Europe: Samsung Europe comprises 17 divisions (subsidiaries) across Europe that represent circa $32 bn. in sales. It has recently become the leading Consumer Electronics brand in the region in terms of recognition and the most preferred by consumers. However, the ambition of the business is to become THE leading Electronics brand and to double its turnover by 2020. In the pursuit of global excellence, we are continuously looking for dynamic new leaders for the digital age of the 21st Century. Imagine a career working for a company that is passionate about its people. It is our people that make Samsung the leader in diverse marketplaces and the market innovator that drives technology. At Samsung Electronics, our products, our people and our approach to business are held to only the highest standards so that we can effectively contribute to a better world.

*Category:* System Engineering *Full-Time/Part-Time:* Regular Full-Time *Location:* Staines-Upon-Thames United Kingdom, Bellevue Washington, Mountain View California
          Hiring for Principal Engineer - Java in Bengaluru/Bangalore, for Exp. 8 - 13 yrs at PSG Consultants. (Job in Kolkata)   
Job Description: Highly skilled back-end engineer using Object-Oriented programming, preferably Java, with exposure to open source libraries and frameworks. * Strong knowledge in REST-based programming using the Restlet framework. REST, SOAP, HADOOP, SPARK, NoSQL t...
          C with React.js Developer - Possibility of REMOTE WORK - BELGRANO (CABA) area - URGENT   
Argentina - KaizenRH is searching for a C with React.js Developer to work in the modern offices of a major company in Belgrano dedicated... to custom software development using open source technologies for corporate clients in the United States. Requirements: C developer with React.js...
          How To Use Free Software To Build Your Own Professional Websites - 3 Simple Steps   
It's easier than ever to learn how to build your own website using free website-building software. The real secret is that you may already have access to robust software and not even know it! Here's a simple 3-step process for making your own websites with free open source software.
          How to Make Your Own Website For Free - Use Free Open Source Software to Create Your Own Website!   
Learning how to make your own website for free is a pretty simple process. The secret? You may already have access to the software! Read the simple steps to get started in a few minutes...
          Make My Own Website - The Popular Free Software You May Already Have   
Learning to make my own website with HTML took years of study in college and an expensive computer program, which of course upgrades every year... for more money. Well, thanks to open source software, times are changing and it is getting easier for the normal person to put up a website in just a few minutes.
          Product Manager, JBoss Data Grid - Red Hat, Inc. - Westford, MA   
At Red Hat, we connect an innovative community of customers, partners, and contributors to deliver an open source stack of trusted, high-performing solutions.
From Red Hat, Inc. - Tue, 06 Jun 2017 18:29:19 GMT - View all Westford, MA jobs
          IT Developer/Architect - International Software systems - Maryland City, MD   
Proficiencies in DevOps. This individual must be well versed in DevOps using industry standards and open source resources....
From Indeed - Thu, 29 Jun 2017 18:32:30 GMT - View all Maryland City, MD jobs
          Software Development Engineer – Big Data, AWS Elastic MapReduce (EMR) - Amazon Corporate LLC - Palo Alto, CA   
You will have a chance to work with the open source community and contribute significant portions its software to open source projects possibly including Hadoop...
From Amazon.com - Sat, 17 Jun 2017 08:32:08 GMT - View all Palo Alto, CA jobs
          2048 Game Pro for Windows PC Desktop v1.5.0.0 released   
2048 Game Pro for Windows PC Desktop Screen shot

This is a professional open source application for the Windows personal computer (PC) desktop (not mobile). The game includes 4x4, 8x8 and 16x16 playing fields with automatic game saving after each move. My program allows you to grow your own kitty to a mature c


Download | 8 MB | Windows | Freeware | Buy Now

          bacula-web/bacula-web   
The open source web based reporting and monitoring tool for Bacula
          Calibre Portable 3.2.1   
Description: Calibre is a free and open source e-book library management application. Calibre is developed by users of e-books for users of e-books. It has a cornucopia of features divided into the following main categories: Library Management: Calibre manages your e-book collection for you. It is designed around the concept of the logical book, i.e., a single entry in […]
          Zekr   
Zekr is an open platform for research on the Holy Quran. It is a Quran-based project, planned to be a universal, open source, cross-platform application to perform most of the usual references to the Quran. The main idea is to build as generic a platform as possible, capable of having different add-ins for its tasks. Tags: Quran, Research on Quran, Search the Quran
          Senior Linux Storage Software Engineer - RSD - Intel - Hillsboro, OR   
Able to work directly with external companies, open source communities and across business units within Intel....
From Intel - Sat, 24 Jun 2017 10:26:17 GMT - View all Hillsboro, OR jobs
          South Africa team in semi-finals of multi-million global XPRIZE competition   
Pretoria – A South African team – Leap to Know – is one of 11 teams that have advanced to the semi-finals of the $15 million Global Learning XPRIZE, one of the largest education competitions in the world to date. The competition challenges teams from all over the world to develop an open source and […]
          Senior Data Architect - Stem Inc - San Francisco Bay Area, CA   
Help design, develop and implement a resilient and performant distributed data processing platform using open source Big Data Technologies....
From Stem Inc - Tue, 27 Jun 2017 05:52:01 GMT - View all San Francisco Bay Area, CA jobs
          Is the Arduino STAR OTTO open source? @ST_World @ArduinoOrg @ST_News @arduino   
Arduino STAR – OTTO. The board was available at Maker Faire in May of 2017 and, according to the arduino.org Twitter account, "it soon will be available from distributors." Searching on Octopart, all there is is a PDF from STM from 2016, when it was announced; so far we could not find which distributors will stock […]
          Wordpress Website Development Company in Delhi, Noida, Gurgaon (Delhi)   
Call @ 9999770566. Again, a scalable open source CMS is at your service. Have you ever considered the uniqueness of WordPress, the framework known for its numerous advantages? Yes, it accomplishes the digital transformation you had thought of for your busines...
          Zephyr QA Leader - Intel - Bangalore, Karnataka   
We have a long track record of contributing to and sponsoring a wide variety of open source projects, from the Linux kernel to the visualization stack to large...
From Intel - Fri, 02 Jun 2017 22:17:46 GMT - View all Bangalore, Karnataka jobs
          Zephyr Test Automation and Test Tool Engineer - Intel - Bangalore, Karnataka   
We have a long track record of contributing to and sponsoring a wide variety of open source projects, from the Linux kernel to the visualization stack to large...
From Intel - Wed, 10 May 2017 10:24:48 GMT - View all Bangalore, Karnataka jobs
          Software Engineer – DroneCode Lead - Intel - Bangalore, Karnataka   
We have a long track record of contributing to and sponsoring a wide variety of open source projects, from the Linux kernel to the visualization stack to large...
From Intel - Sat, 18 Mar 2017 10:18:40 GMT - View all Bangalore, Karnataka jobs
          BadBoy: Spam Blocker & Reporter (v7.2.223)   
Change Log:
--------------------
BadBoy
v7.2.223 (2017-07-02)
Full Changelog Previous releases

anti-spam update


Description:
--------------------
Please support me in the daily fight against spam on Patreon!

BadBoy is open source and development is done on GitHub. You can contribute code, localization, and report issues or spam there: https://github.com/funkydude/BadBoy

Why so many updates?
BadBoy has become popular, and spammers have taken notice. As soon as new updates come out, spammers attempt to get around the new filters. People will either complain about missing spam or about too many updates; it's not a situation where I can make both sets of people happy. I highly recommend using the WoWI Minion addon updater.

BadBoy Highlights:

Blocking & reporting spam: gold, hack, phishing, account trading, runescape gold trading, casino, illegal item selling, etc...
Spam is removed from: LFG tool, Chat, Chat bubbles.
A 20-line chat buffer/throttle to prevent people spamming the same message one after another (sketched below).
Configuration screen by typing /badboy
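
The buffer idea is straightforward: remember the last 20 messages and drop exact repeats. Here is a minimal sketch of that idea in Python (BadBoy itself is written in Lua, so this illustrates the technique, not the addon's code):

```python
# Minimal sketch of a fixed-size chat throttle: a message is dropped when it
# already appears among the last 20 messages seen. Illustrative only; this is
# not BadBoy's actual (Lua) implementation.
from collections import deque

class ChatThrottle:
    def __init__(self, size=20):
        self.recent = deque(maxlen=size)  # oldest entries fall off automatically

    def allow(self, message):
        if message in self.recent:
            return False  # exact repeat within the buffer window: block it
        self.recent.append(message)
        return True

throttle = ChatThrottle()
print(throttle.allow("WTS cheap gold!"))  # True, first time seen
print(throttle.allow("WTS cheap gold!"))  # False, repeated message is throttled
```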

The following are not scanned and cannot be reported/blocked by this addon: Friends, Real ID friends, Guild, Party, Raid, GMs.
The following channels are monitored: Trade, General, Yell, Whispers.

Please post missed spam on GitHub.

Plugins

Simple and infinite ignore list BadBoy_Ignore.
Block chat by typing in keywords BadBoy_CCleaner.
Remove guild advertising BadBoy_Guilded.
Block whispers by player level ...
          BigWigs Bossmods (v62)   
Change Log:
--------------------
BigWigs
v62 (2017-07-01)
Full Changelog Previous releases

TombOfSargeras/MaidenofVigilance: Change locale use for Mythic Orbs
TombOfSargeras/MaidenofVigilance: Add Mythic Ability timers
bump version
TombOfSargeras/FallenAvatar: Use CLEU for stage 2 detection.
Loader: More improvements to version warning
Loader: Cleanup version warning code, locales need verified.
Add option and icons for Demonic Inquisition Echoing Anguish targets (#292)
TombOfSargeras/DemonicInquisition: Add Echoing Anguish target marking
TombOfSargeras/SistersoftheMoon: Play a sound for players not targetted by the Twilight Glaive when it happens, closes #289
Core/BossPrototype: TargetMessage sound is optional
TombOfSargeras/Sasszine: Remove unused (leftover) table.
TombOfSargeras/Sasszine: Mythic Updates, Add counter to Burden of Pain
TombOfSargeras/MaidenofVigilance: Fix chat error.
TombOfSargeras/Kiljaeden: Fix Armageddon and Singularity on normal mode.
Update deDE.lua (#288)
TombOfSargeras/Goroth: Tweak mythic Burning Armor timer.


Description:
--------------------
Please support my work on Patreon!

BigWigs is open source and development is done on GitHub. You can contribute code, localization, and report issues there: https://github.com/BigWigsMods/BigWigs

BigWigs is a boss encounter add-on. It consists of many individual encounter scripts, or boss modules; mini add-ons that are designed to trigger alert messages, timer bars, sounds, and so forth, for one specific raid encounter. These encounter scripts are activated when you target or mouse over a raid boss, or if any other BigWigs user in your raid does. In most cases only one module will be running at any given time.

If you're looking for boss encounter scripts for 5-man dungeons, these are not in BigWigs; they live in their own add-on, LittleWigs.
          Older FOSS News (Catchup)   
  • [Older] Analysts predict perfect storm of innovation, courtesy of open source

    As the $148 billion cloud market continues to grow at a rate of 25 percent annually, the open-source community can claim much of the credit for the adoption and innovation driving businesses to go all in on the cloud, according to Krish Subramanian (pictured), founder and principal analyst at Rishidot Research LLC.

    “I would even go one step further and say open source is completely disrupting the traditional enterprise software in modern business,” Subramanian said.

  • [Older] Open Source Codecs Pave Way for High-Resolution Streaming Video

    First, some background: The video compression standard H.264, also known as AVC (Advanced Video Coding), has been the workhorse codec for broadcasters, internet streamers and video producers around the world for the past decade. Users can see what codec is being used to compress video on YouTube by right-clicking on any video and selecting “Stats for nerds.”

  • [Older] Block.one Preps Open Source Blockchain Operating System

    Most IT organizations are a long way from deploying applications based on blockchain technologies into a production environment. But many of them are encouraging developers to build prototypes of applications that employ distributed ledgers based on blockchain technologies.

    To facilitate those efforts, block.one announced it is developing an open source instance of what it describes as a blockchain operating system dubbed EOS. Company CEO Brendan Blumer says an open source approach will give developers a way to build blockchain applications that are not based on the number of transactions processed using a blockchain cloud service.

  • [Older] Yahoo fuels open source speedway with Daytona, looks to automate application analysis

    Daytona – not the Florida city famous for its annual NASCAR race, but Yahoo’s latest open source offering which aims to maximise application throughput.

    Daytona is an open-source framework for automated performance testing and analysis, which users can deploy as a hosted service in public cloud or on-premise.

    The key selling point of Daytona is its simple, unified user interface, in which users can test and analyse the performance of any application. This allows users to focus on performance analysis without changing context across various sources and formats of data.

  • [Older] AppNexus and Unruly launch open-source video header bidding solution
  • [Older] AppNexus & Unruly Launch Open-Source Pre-bid Solution For Outstream Video
  • [Older] Google Is Open Sourcing Firebase SDKs for App Back-End Services
  • [Older] What Do Open Source and DevOps Have in Common? A Lot, Actually
  • [Older] [Paywall] 6 free and feature-filled open source project management tools
  • [Older] Spinnaker 1.0 Open-Source App Release Management Platform Debuts
  • [Older] Google hoists Spinnaker for continuous delivery
  • [Older] Google Releases New Version of Spinnaker Cloud Code Update Platform

    Google has released a new version of Spinnaker, an open-source software release management platform for deploying application code to the cloud.

    Video streaming giant Netflix originally developed the technology to enable continuous delivery of software updates to its hosted applications and services on Amazon's cloud platform.

  • [Older] Open-source software for satellite deformation monitoring

    PyRate is open source Python software for collating and analysing Interferometric Synthetic Aperture Radar (InSAR) displacement time series data.

  • [Older] LanguageTool is an open-source proof reader for 25+ languages

    LanguageTool is an open-source spelling and grammar checker for Chrome, Firefox, the desktop (via Java) and more.

    The browser extensions enable checking the text you’re entering in a web text box, or any other selectable text on a web page. The system works much like other spell checkers. Enter text, click the LanguageTool icon and it instantly displays a report listing any issues. Browse the list, click any corrections you’d like to accept and they’re applied to the source text.

    If you don’t want to apply a particular rule, you can turn it off from the report with a single click. Similarly, you’re able to add special words to a personal dictionary so they won’t be flagged as misspelled.
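
    Beyond the browser extensions, LanguageTool can also be driven programmatically. A minimal sketch in Python, assuming the public v2 HTTP endpoint at api.languagetool.org (a self-hosted LanguageTool server exposes the same /v2/check route):

    ```python
    # Minimal sketch: send text to a LanguageTool server's /v2/check endpoint
    # and print each flagged issue with its suggested replacements.
    # Assumes the public instance at api.languagetool.org.
    import json
    import urllib.parse
    import urllib.request

    data = urllib.parse.urlencode({
        "text": "This sentense has a speling mistake.",
        "language": "en-US",
    }).encode("utf-8")

    with urllib.request.urlopen("https://api.languagetool.org/v2/check", data) as resp:
        result = json.load(resp)

    for match in result["matches"]:
        suggestions = [r["value"] for r in match["replacements"]]
        print(match["message"], "->", suggestions)
    ```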

  • [Older] What is Open Source?

    Another popular application of open source technology is in Linux. Operating systems like Ubuntu, Fedora and Linux Mint use open source software licenses, and are modified and updated regularly by their user communities. All Linux-based operating systems are offered free of charge, offering an attractive alternative to expensive Windows licenses.

  • [Older] The biggest misconception about open source? It's free

    When companies start looking toward open source, there is a misconception that the technology is free, according to Lisa Caywood, director of ecosystem development at the OpenDaylight Project, The Linux Foundation, speaking Tuesday at Interop ITX in Las Vegas. Though core components are freely accessible, companies still have to build, test and integrate open source solutions at scale.

  • [Older] Five Ways MSPs Can Add Value to Free and Open Source Software

    In other words, if you're an MSP, you should understand how open source code – which is usually (but not always) given away for free – can be leveraged to provide products or services that people are willing to pay for.

  • [Older] SNAS open source networking project captures BGP telemetry

    Conry-Murray pointed out that SNAS is hardly a new effort. Instead, he said it is a renaming of the OpenBMP project, which was first developed by Cisco and later released under an Eclipse license as an open source networking system. The real-time topology information is aimed at improving visibility and understanding of the state of the network to boost security and performance. Data can be collected using an x86 server and stored in a MySQL database, which is part of the SNAS package. The program parses and sorts data using protocol headings and makes it accessible via APIs.
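
    To illustrate that collection path, here is a hypothetical sketch of reading parsed BGP updates back out of the bundled MySQL store from Python. The table and column names below are invented for illustration and are not the project's real schema:

    ```python
    # Hypothetical sketch of reading parsed BGP telemetry out of the MySQL
    # store that ships with SNAS/OpenBMP. The names bgp_updates, prefix,
    # origin_as and timestamp are illustrative only, not the real schema.
    import mysql.connector  # assumes the mysql-connector-python package

    conn = mysql.connector.connect(host="snas-collector", user="snas",
                                   password="secret", database="openbmp")
    cur = conn.cursor()
    cur.execute(
        "SELECT prefix, origin_as, timestamp "
        "FROM bgp_updates ORDER BY timestamp DESC LIMIT 10"
    )
    for prefix, origin_as, ts in cur:
        print(f"{ts}  {prefix}  announced by AS{origin_as}")
    conn.close()
    ```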

  • [Older] Impact of Open Source Technology on Analytics

    With the help of open source analytics, companies are able to improve the project by contributing to it, adding advanced features at will, and collectively moving the platform forward. One of the main reasons why software vendors choose open source platforms is to be independent. Most of the time, vendors and their platforms are well supported while under contract, but if the firm wants to move on, the relationship gets tarnished. With an open source platform, enterprises can be independent of a vendor’s proprietary software stack. It also allows them to be part of a community.

  • [Older] Open Source in Business Intelligence

    Most advantages associated with the open source product category generally hold true for analytics as well. The ease of downloading, the absence of licensing fees, and even the scope for customizing source code to suit the needs of the enterprise apply to the analytics product domain too. The absence of license costs simplifies the task of building prototypes and testing with minimum investment.

  • [Older] Open-source ubiquitous at DCD>Webscale

    Lacking in enterprise credentials just a few years ago, today open-source technology standards are rapidly becoming acceptable when designing, building and operating digital infrastructure. Whether in the form of the Open Compute Project (OCP), OpenStack, IBM’s OpenPOWER, or others, open-source standardization and commoditization of the “factory of the future” is now an accepted evolutionary path.

  • [Older] Elementary OS is trying to create a business model for open source app developers

    What sets elementary OS apart from the rest of the crowd is its attention to detail and polish. That comes naturally, as the team behind elementary OS comes from a graphic design background, so their approach to desktop Linux is to use a stable Ubuntu LTS base and create an experience that matches the gloss and polish of macOS.

    The elementary OS team has released a new version of the OS, code-named Loki. In addition to a newer kernel (4.8) and improvements to every component of the operating system, the most notable feature of the release is AppCenter.

    In a previous interview, Daniel Fore, the founder of the project, told me about his vision to create a platform for third-party application developers where they can not only reach more users through a store, but also monetize their work.

  • [Older] Open Source Lab inaugurated at VVCE

    The Open Source Lab is set up with a vision to create a community of excellent programmers and increase awareness about open source. The Open Source Lab is open to all students and faculty of VVCE and will function as a library for open source software and hardware. During the event, the students of IV semester, CSE, demonstrated one of their innovative projects, “Remote Display”, developed using the Raspberry Pi 3 platform available in the lab and implemented with the objective of displaying instant news, messages, images and videos on a remote display.

  • [Older] Open Source Lab at VVCE
  • [Older] Is There Life After Open Source?

    It's not like we don't have a lot of open-source successes out there. Linux, which is open source, has dominated the server market for years. OpenDaylight and OpenStack are huge in software-defined networking and the cloud, respectively. AT&T's software for network functions virtualization (NFV), called Enhanced Control, Orchestration, Management and Policy, or ECOMP, is now stealing the limelight from NFV vendors, and plenty of startups would like to be the "new Red Hat." The challenge is that open source changes the whole supply-and-support relationship, and that means it could change the whole tech business model.

    [...]

    A shift to an open-source model with community support has to somehow address that reality. If it does, we could see a true open-source revolution. If not, we may end up reinventing "products" and "vendors."

  • [Older] Sprint unveils C3PO for open source NFV/SDN
  • [Older] The importance of an open-source Network Operations System

    Linux-based NOS offering freedom of innovation whilst maintaining stability and minimising vulnerability.

  • [Older] Sprint Builds its 5G Clout Through Open Source, NYU Affiliation
  • [Older] Open Sores: Are Telcos on a Collision Course With Vendors?

    But companies that have thrived by selling proprietary technology have much to lose from this transition. And not all accept that open source will inevitably run riot. "I find it hard to see that very large portions of software in the telco industry will be open sourced because, ultimately, if there are no vendors then every operator has to build its own system," said Ulf Ewaldsson, the head of digital services for Sweden's Ericsson AB (Nasdaq: ERIC), during a recent conversation with Light Reading. "There is a tendency to think about doing that, but for the majority it is not close to being an option." (See Ericsson's Ewaldsson Takes Aim at Telco 'Conservatism'.)

  • [Older] Oracle delays Java 9, modularity issues blamed

    Java 9 had been expected to drop by July of this year, 2017.

  • [Older] CoreOS chief decries cloud lock-in

    CoreOS CEO Alex Polvi spent his morning on Wednesday biting the hands that fed attendees at his company's conference, CoreOS Fest 2017.

    "Every shift in infrastructure that we've seen ... has promised more efficiency, reliability and agility," said Polvi. "But every single one has resulted in a massive proprietary software vendor that has undermined all the work done in the free software community. And we're beginning to believe cloud is looking the same."

  • [Older] IBM, Google, Lyft launch Istio open source microservices platform

    IBM, Google and Lyft joined forces on Istio, an open source microservices platform that connects and manages networks of microservices, regardless of their source or vendor.

  • [Older] How open-sourcing your code base can kickstart growth

    The main driver of Stream’s growth might sound somewhat surprising. “The open-source community is by far our biggest source of traffic. It is key for the growth of Stream, as we have quite a complex product. I actually already knew Thierry via his open-source libraries before we met in real life,” says Barbugli.

    To accelerate their growth, Stream puts a lot of effort into creating example apps and distributing these in the communities.

  • [Older] Benefits of an open source approach to IoT application enablement [Ed: No, proprietary and commercial are not the same thing]

    Open source AEPs have some distinct advantages over commercial [sic] AEPs.

  • [Older] The Great OpenStack Delusion – how open source cloud infrastructure can overcome a crisis

    Canonical founder Mark Shuttleworth delivers some tough love in his assessment of OpenStack and what needs to happen to get it out of a crisis.

  • [Older] The evolution of OpenStack: Where next for the open source cloud platform?

    In the case of OpenStack, and its pool of contributors and supplier partners, any hint of a company opting to downsize their involvement is often seized upon by industry watchers as a sign the wheels are coming off the open source cloud juggernaut.

  • [Older] As open-source adoption skyrockets in enterprise, Linux addresses ease of use

    Joshipura explained how discriminating the organization is with each project it works on. From setting up the requirements to the architecture, Linux provides an explicit definition of the end user’s use case to the community. Linux facilitates the design work, architectural leadership, and inter-project cross-leadership in an actively managed, sustainable ecosystem.

  • [Older] Spinnaker, an open-source project for continuous delivery, hits the 1.0 milestone

    Google announced the 1.0 release of Spinnaker, which was originally developed inside Netflix and enhanced by Google and a few other companies. The software is used by companies like Target and Cloudera to enable continuous delivery, a modern software development concept which holds that application updates should be delivered when they are ready, instead of on a fixed schedule.

  • [Older] Chef Automate for application automation in cloud-native container-first
  • [Older] Chef tightens the links between Chef Automate and its open-source DevOps products
  • Second update from summer training 2017

    We are already at the end of the second week of the dgplug summer training 2017. From this week onwards, we’ll have formal sessions only 3 days a week.

  • openbsd changes of note 624
  • “Absolute FreeBSD 3rd Edition” update
  • [Older] Initial Artifex Ruling Is A Victory For Open-Source Software
  • How to apply traditional storage management techniques to open-source tech
  • [Older] Open-source approach provides faster, better solubility predictions

    Predicting solubility is important to a variety of applications. In the pharmaceutical field, for example, it is crucial to know the solubility of a drug since it directly determines its availability to the body. The petroleum industry provides another example: Substances with low solubility can form scales or unwanted deposits in pipes or on drills, causing blockages and other big problems.

  • [Older] PrismTech Announces Availability of Open Source DDS Community Edition v6.7 Software
  • [Older] VN plans interactive, open-for-all web knowledge base

    Việt Nam will soon have its own "Wikipedia" page, maybe even more interactive, developed by Vietnamese people for Vietnamese people.

    The page is a proactive, interactive effort to spread knowledge and awareness of scientific and technological developments, promoting education resources and sci-tech creativity in the country.

  • [Older] Open source textbooks help keep college affordable

    Keeping college education affordable is a guiding principle at Dalton State College, and one key way faculty members contribute is by collaborating to create open educational resources for their students, allowing them to avoid buying costly textbooks for some classes.

  • Launch Of Open Access Book On Geographical Indications In Asia-Pacific

    A new book launched this week in Geneva offers a unique compilation of the challenges and promises of the protection of geographical indications (GIs) with a particular focus on countries in the Asia-Pacific region.

    We should “not romanticise GIs,” but we need to be “very pragmatic and practical” and “a bit more sceptical,” Irene Calboli, professor at the Management University of Singapore, said at the launch.

    Calboli presented on 27 June the launch of the book Geographical Indications at the Crossroads of Trade, Development, and Culture. Focus on Asia-Pacific at the World Trade Organization. The book, co-edited by Calboli and Wee Loong Ng-Loy, professor at the National University of Singapore, is available by open access, as a contribution to the global body of knowledge on the subject.

  • [Older] Sweet dreams: Eclipse creates IoT Open Testbeds

    Open source software lifecycle group the Eclipse Foundation has laid down additional cornerstones that it is hoping will bring more unity and compatibility to the IoT.

  • SDL2 Brought To QNX 7.0

    For fans of the QNX operating system, SDL2 mainline can now run on QNX 7.0.

    There has been past QNX + SDL work; now the latest mainline SDL2 code works with QNX 7.0, which was released by BlackBerry earlier this year. The support landed this weekend in the Simple DirectMedia Layer with this Git commit.


          Web Browsers and Blockchain   
  • Mozilla Rolls Out First Firefox 54 Point Release to Fix Netflix Issue on Linux

    More than two weeks after Mozilla unveiled Firefox 54 as the first branch of the web browser to use multiple operating system processes for web page content, we now see the availability of the first point release.

    Mozilla Firefox 54.0.1 was first offered to stable release channel users on June 29, 2017, and, according to the official release notes, it fixes a Netflix issue for users of Linux-based operating systems, addresses a PDF printing issue, and resolves multiple tab-related issues that had been reported in Firefox 54.0.

  • The Top Four Open-Source Blockchain Projects in Media

    1. Brave Web Browser

    Once upon a time, getting users to pay attention to ads on webpages was the biggest problem facing online marketers. Today, that challenge has grown even more daunting. Convincing users not to block online ads entirely has become a major task in online media.

    Brave is an open-source web browser that gives users the option to block the ads that they would normally see when they visit a website. If the user so chooses, Brave replaces those blocked ads with ones tailored to a user's preferences. The browser gives the users a slice of the advertising revenue from the tailored ads. By paying users to view ads tailored to them, Brave delivers a better user experience, while also making it easier for advertisers to reach qualified leads through online ads.

    Blockchain technology enters the picture in two ways. First, Bitcoin is used to facilitate financial transactions between Brave and its advertising partners and users. Second, Brave uses the Bitcoin ledger to store data about user browsing behavior. This eliminates the need for a centralized database where specific users' behavior would be linked to their names. Instead, browsing behavior remains anonymous and essentially un-hackable.

  • Blockstack: An Open Source Browser Powered By Blockchain For Creating A New Internet

    Blockstack, a blockchain startup, has released a decentralized browser to make an internet that would be free from dependence on large organizations and key players. The makers of Blockstack browser have called it the Netscape of the decentralized internet for running and making apps. A developer release of Blockstack browser is available, and a user version will arrive in six months.

  • Colu Launches Bankbox, an Open-Source Protocol to Help Banks Issue Digital Currencies
  • BloqLabs from Bloq goes live to connect enterprises with open source blockchain projects

  • Bloq Launches BloqLabs to Bring Open Source Blockchain Technologies to Enterprise

    Bloq, a leader in the development of enterprise-grade Blockchain solutions, has launched BloqLabs to expand its ongoing sponsorship and support of critical open source projects in the bitcoin and Blockchain ecosystems.

  • [Older] Blockchain pioneers back open source code, Greenwich Associates

    81% view permissioned blockchains as inherently more secure than public blockchains. “In the end, a blockchain-enabled financial market will likely consist of a core plumbing of market infrastructure developed by the open source community, operating beneath proprietary applications that provide a higher level of security,” says Johnson.



          FOSS Databases: Older News   
  • [Older] Crate.io Introduces CrateDB 2.0 Enterprise and Open Source Editions
  • [Older] CrateDB 2.0 Enterprise stresses security and monitoring—and open source

    When open source SQL database CrateDB first debuted, its professed mission was to deliver easy, fast analytics on reams of machine-generated data, while running in containerized, cloud-native environments.

    That mission hasn't changed with the release of version 2.0, but it has been expanded by way of an enterprise edition with pro-level features. Rather than distribute the enterprise edition as a closed-source, binary blob, the maker of CrateDB is offering it as open source to help speed uptake and participation.

  • [Older] New open source database designed for enterprise users

    Businesses are looking for database technology that increases their agility, scalability and security, supports a range of different use cases, and at the same time keeps down costs.

    On the other hand developers want a database that is open and extensible, and lets them easily develop many different types of application.

    Open source specialist MariaDB Corporation is looking to meet these conflicting demands with MariaDB TX 2.0, an open source transactional database solution for modern application development.

  • [Older] IBM's new platform readies open source databases for private cloud
  • [Older] IBM announces open source DBaaS on Power Systems

Database as a Service solutions are on the rise. IBM is looking to take advantage of that and build momentum as the launch of POWER9 gets closer. The announcement will also appeal to many in the OpenStack community, especially those running OpenStack-based private clouds. It will be interesting to see how many of the other OpenStack distributions begin to offer this on their platforms.


          Make up your mind on Trump's accusations of a "chemical weapons attack" in Syria   

Trump’s Sarin Claims Built on ‘Lie’



Here is some Sunday reading.

I am reposting yesterday's discussion between Don DeBar and Mark Sleboda on CPR News about Trump's accusations of a chemical weapons attack.

While the Americans backed down almost immediately, the episode acts as an open invitation for al-Qaeda or any other terrorist entity to carry out a false flag attack.


Mark Sleboda recommended four sources so that people can make up their own minds.







United States Ambassador to the United Nations Nikki Haley has pre-warned Assad about a pre-planned chemical attack that he is putting together, which pre-blames Assad, Iran and Russia.


Any further attacks on the people of Syria will be blamed on Assad, but also on Russia & Iran, who support him killing his own people.

U.S. says its warning appears to have averted Syrian chemical attack


U.S. Defense Secretary Jim Mattis said on Wednesday that the Syrian government of President Bashar al-Assad appeared so far to have heeded a warning this week from Washington not to carry out a chemical weapons attack.


Seymour Hersh's explosive piece was turned down by the London Review of Books as well as by US media, so it was published in the German newspaper Die Welt - an indication of a McCarthyite policy of repressing anything that goes against the official neo-con narrative.



Trump‘s Red Line





President Donald Trump ignored important intelligence reports when he decided to attack Syria after he saw pictures of dying children. Seymour M. Hersh investigated the case of the alleged Sarin gas attack.

Will Get Fooled Again – Seymour Hersh, Welt, and the Khan Sheikhoun Chemical Attack

25 June, 2017

On June 25th 2017 the German newspaper, Welt, published the latest piece by Seymour Hersh, countering the “mainstream” narrative around the April 4th 2017 Khan Sheikhoun chemical attack in Syria. The attack, where Sarin was allegedly used against the local population, dropped in a bomb by the Syrian Air Force, resulted in President Trump taking the decision to launch cruise missiles at a Syrian airbase.

As with his other recent articles, Hersh presented another version of events, claiming the established narrative was wrong. And, as with those other recent articles, Hersh based his case on a tiny number of anonymous sources, presented no other evidence to support his case, and ignored or dismissed evidence that countered the alternative narrative he was trying to build.
This isn’t the first chemical attack in Syria which Hersh has presented a counter-narrative for, based on a handful of anonymous sources. In his lengthy articles for the London Review of Books, “Whose sarin?” and “The Red Line and the Rat Line”, Hersh made the case that the August 21st 2013 Sarin attack in Damascus was in fact a false flag attack intended to draw the US into the conflict with Syria. This claim fell apart under real scrutiny, and relied heavily on ignoring much of the evidence around the attacks, an ignorance of the complexities of producing and transporting Sarin, and a lack of understanding about facts firmly established about the attacks.
With Hersh’s latest article, this pattern of behaviour is repeated. The vast majority of the article appears to be based on an anonymous source, described as “a senior adviser to the American intelligence community, who has served in senior positions in the Defense Department and Central Intelligence Agency”. As with his earlier articles, details of the attack as described by his source flies in the face of all other evidence presented by a range of other sources.
So what scenario does Hersh’s source describe, and how does this contradict other claims? Hersh claims that “Syrians had targeted a jihadist meeting site on April 4 using a Russian-supplied guided bomb equipped with conventional explosives”, and this attack resulted in the release of chemicals, including chlorine, but not Sarin, that produced the mass casualty event seen on April 4th. Hersh’s source is able to provide a great deal of information about the target, claiming intel on the location was shared with the Americans ahead of the attack.
Hersh’s source describes the building as a “two-story cinder-block building in the northern part of town”, with a basement containing “rockets, weapons and ammunition, as well as products that could be distributed for free to the community, among them medicines and chlorine-based decontaminants for cleansing the bodies of the dead before burial”. According to Hersh’s source, the floor above was “an established meeting place” and “a long-time facility that would have had security, weapons, communications, files and a map center.”
The source goes on to claim that Russia had been watching the location carefully, establishing its use as a Jihadi meeting place, and watching the location with a “drone for days”, confirming its use and the activity around the building. According to the source, the target was then hit at 6:55am on April 4th, and a Bomb Damage Assessment by the US military determined that a Syrian 500lb bomb “triggered a series of secondary explosions that could have generated a huge toxic cloud that began to spread over the town, formed by the release of the fertilizers, disinfectants and other goods stored in the basement, its effect magnified by the dense morning air, which trapped the fumes close to the ground.”
At this point it’s worth taking a look at the claims the Syrian and Russian governments made in response to accusations that Syria had dropped Sarin on Khan Sheikhoun. Walid Muallem, Syria’s Foreign Minister, stated in a press conference two days after the attack that the first air raid was conducted at 11:30am local time, attacking “an arms depot belonging to al-Nusra Front chemical weapons”. It was noted by observers at the time that the time of the claimed attack was hours after the first reports of casualties came in, and that it contradicts both the 6:55am stated by Hersh’s source and the slightly earlier time provided by the Pentagon, approximately between 6:37am and 6:46am local time. Not only that, but the Syrian Foreign Minister also described the target as a chemical weapons arms depot, not a meeting place that stored other items in the basement.
Russia also published their own claims about the attack. Sputnik reported the following:
According to Konashenkov, on Tuesday “from 11.30 to 12.30, local time, [8.30 to 9.30 GMT] Syrian aircraft conducted an airstrike in the eastern outskirts of Khan Shaykhun on a large warehouse of ammunition of terrorists and the mass of military equipment”.
Konashenkov said that from this warehouse, chemical weapons’ ammunition was delivered to Iraq by militants.
Konashenkov added that there were workshops for manufacturing bombs, stuffed with poisonous substances, on the territory of this warehouse. He noted that these munitions with toxic substances were also used by militants in Syria’s Aleppo.”
These claims are consistent with the claims of their Syrian ally, but not the claims made by Hersh and his source. In the face of allegations of chemical weapon use, neither Russia nor Syria mentioned targeting “a jihadist meeting site”; they described the location as a “large warehouse” on the “eastern outskirts of Khan Shaykhun”, not a “two-story cinder-block building in the northern part of town” with “security, weapons, communications, files and a map center.” In fact, the only thing Hersh’s account and the Russian and Syrian accounts agree on is that it was a Syrian aircraft which conducted the attack.
In addition to this, neither Syria nor Russia presented any evidence to support their claim. If, as Hersh claims, Russia had been observing the site with a “drone for days” then they would not only have the precise location of the site, but footage of the site. However, both Syria and Russia have failed to make any imagery of the site public, nor have they provided any specific details about the location of the site. If they had, it would be possible to easily check if the location had been bombed on Terraserver, which has satellite imagery of Khan Sheikhoun before and after the date of attack. In common with Russia and Syria, Hersh’s source seems unable to provide the exact location of the attack, despite his apparent in depth knowledge of the attack.
Ignoring the fact that the version of events presented by Hersh runs counter to narratives produced by all sides, the claims around the chemical exposure are also worth examining. Hersh refers to “a Bomb Damage Assessment (BDA) by the U.S. military” of the strike, which he provides no source for, which supposedly states “a series of secondary explosions that could have generated a huge toxic cloud that began to spread over the town, formed by the release of the fertilizers, disinfectants and other goods stored in the basement”. He describes the symptoms seen in victims as “consistent with the release of a mixture of chemicals, including chlorine and the organophosphates used in many fertilizers, which can cause neurotoxic effects similar to those of sarin.” Here it is worth pointing out that organophosphates are used as pesticides, not fertilizers, and it’s unclear if this error is from Hersh himself or his anonymous source. This is not the only factual error in the report, with Hersh stating an SU-24 was used in the attack, not an SU-22 as claimed by every other source, including the US government.
Despite Hersh’s apparent belief Sarin was not used in the attack, other sources disagree, not least the OPCW, tasked to investigate the attack. On April 19th 2017 the OPCW published a statement by Director-General, Ambassador Ahmet Üzümcü describing the results of the analysis of samples taken from victims of the attack, both living and dead, stating:
The results of these analyses from four OPCW designated laboratories indicate exposure to Sarin or a Sarin-like substance. While further details of the laboratory analyses will follow, the analytical results already obtained are incontrovertible.”
A later report from the OPCW, dated May 19th, provided further analysis of samples from the site, including dead animals recovered from the site, and environmental samples. Signs of Sarin or Sarin-like substances were detected in many samples, as well as Sarin degradation products, and in at least two samples Sarin itself was detected.
These results are also consistent with intelligence published by the French government, which describes the following:
The analyses carried out by French experts on the environmental samples collected at one of the impact points of the chemical attack at Khan Sheikhoun on 4 April 2017 reveal the presence of sarin, of a specific secondary product (diisopropyl methylphosphonate – DIMP) formed during synthesis of sarin from isopropanol and DF (methylphosphonyl difluoride), and hexamine. Analysis of biomedical samples also shows that a victim of the Khan Sheikhoun attack, a sample of whose blood was taken in Syria on the very day of the attack, was exposed to sarin.”
Based on this and other reports, multiple sources state Sarin was used in the attack, despite Hersh’s narrative of an accidental chemical release. The fact Hersh does not refer to any of these reports seems to, at best, overlook key information about the nature of the attack, and at worst, purposely ignores information that contradicts the narrative he’s attempting to build.
Going back to the attack site, this ignoring or ignorance of contradictory information is also apparent. Open source material from the day of the attack, as well as satellite imagery analysis by various sources (including this excellent piece by the New York Times) consistently point to the same impact sites, one of which is the specific crater claimed to be the source of Sarin released on the day of the attack. None of these point to the structure described by Hersh, nor is there any evidence of a site as described by Hersh being attacked. Journalists visited the town soon after the attack, and made no mention of the site as described by Hersh.
One might argue that all the individuals and groups on the ground, all the doctors treating the victims, and every single person spoken to by the journalists visiting the site failed to mention the site described by Hersh, but there’s a very simple way to clear up this matter. Anyone can access satellite imagery of the town before and after the date of the attack thanks to the imagery available on Terraserver, all Hersh’s source has to do is provide the coordinates of the building attacked and anyone with an internet connection will be able to look at that exact location, and see the destroyed building. A simple way for both Hersh and Welt to preserve their reputations.


Scott Ritter, who was demonised in the lead-up to the 2003 invasion of Iraq for saying that there were no weapons of mass destruction, knows weapons of mass destruction like no other person on the planet – certainly better than Bellingcat!

Ex-Weapons Inspector: Trump’s Sarin Claims Built on ‘Lie’

Scott Ritter takes on White House Syria attack claims.

By SCOTT RITTER
Sarin gas victim in Syria, as reported in April 2017. | Ninian Reid / Flickr


29 June, 2017


On the night of June 26, the White House Press Secretary released a statement, via Twitter, that, “the United States has identified potential preparations for another chemical weapons attack by the Assad regime that would likely result in the mass murder of civilians, including innocent children.”  The tweet went on to declare that, “the activities are similar to preparations the regime made before its April 4 chemical weapons attack,” before warning that if “Mr. Assad conducts another mass murder attack using chemical weapons, he and his military will pay a heavy price.”

A Pentagon spokesman backed up the White House tweet, stating that U.S. intelligence had observed “activity” at a Syrian air base that indicated “active preparation for chemical weapons use” was underway.  The air base in question, Shayrat, had been implicated by the United States as the origin of aircraft and munitions used in an alleged chemical weapons attack on the village of Khan Sheikhun on April 4.  The observed activity was at an aircraft hangar that had been struck by cruise missiles fired by U.S. Navy destroyers during a retaliatory strike on April 6.

The White House statement comes on the heels of the publication of an article by Pulitzer Prize-winning investigative journalist Seymour Hersh in a German publication, Die Welt, which questions, among many things, the validity of the intelligence underpinning the allegations leveled at Syria regarding the events of April 4 in and around Khan Sheikhun. (In the interests of full disclosure, I had assisted Mr. Hersh in fact-checking certain aspects of his article; I was not a source of any information used in his piece.)  Not surprisingly, Mr. Hersh’s article has come under attack from many circles, the most vociferous of these being a UK-based citizen activist named Eliot Higgins who, through his Bellingcat blog, has been widely cited by media outlets in the U.S. and UK as a source of information implicating the Syrian government in that alleged April chemical attack on Khan Sheikhun.

Neither Hersh nor Higgins possesses definitive proof to bolster their respective positions; the latter draws upon assertions made by supposed eyewitnesses backed up with forensic testing of materials alleged to be sourced to the scene of the attack that indicate the presence of Sarin, a deadly nerve agent, while the former relies upon anonymous sources within the U.S. military and intelligence establishments who provide a counter narrative to the official U.S. government position. What is clear, however, is that both cannot be right—either the Syrian government conducted a chemical weapons attack on Khan Sheikhun, or it didn’t.  There is no middle ground.

The search for truth is as old as civilization. Philosophers throughout the ages have struggled with the difficulties of rationalizing the beginning of existence, and the relationships between the one and the many. Aristotle approached this challenge through what he called the development of potentiality to actuality, which examined truth in terms of the causes that act on things. This approach is as relevant today as it was two millennia prior, and its application to the problem of ascertaining fact from fiction regarding Khan Sheikhun goes far in helping unpack the White House statements regarding Syrian chemical preparations and the Hersh-Higgins debate.

According to Aristotle, there were four causes that needed to be examined in the search for truth — material, efficient, formal and final. The material cause represents the element out of which an object is created. In terms of the present discussion, one could speak of the material cause in terms of the actual chemical weapon alleged to have been used at Khan Sheikhun. The odd thing about both the Khan Sheikhun attack and the current White House statements, however, is that no one has produced any physical evidence of there actually having been a chemical weapon, let alone what kind of weapon was allegedly employed. Like a prosecutor trying a murder case without producing the actual murder weapon, Syria’s accusers have assembled a case that is purely circumstantial — plenty of dead and dying victims, but nothing that links these victims to an actual physical object.

Human Rights Watch (HRW), drawing upon analysis of images of fragments allegedly recovered from the scene of the attack, brought to them by the volunteer rescue organization White Helmets, has claimed that the material cause of the Khan Sheikhun event is a Soviet-made KhAB-250 chemical bomb, purpose-built to deliver Sarin nerve agent. There are several issues with the HRW assessment. First and foremost, there is no independent verification that the objects in question are what HRW claims, or that they were even physically present at Khan Sheikhun, let alone deposited there as a result of an air attack by the Syrian government. Moreover, the KhAB-250 bomb was never exported by either the Soviet or Russian governments, thereby making the provenance of any such ordnance in the Syrian inventory highly suspect.

Sarin is a non-persistent chemical agent whose military function is to inflict casualties through direct exposure. Any ordnance intended to deliver Sarin would, like the KhAB-250, be designed to disseminate the agent in aerosol form, fine droplets that would be breathed in by the victim, or coat the victim’s skin. In combat, the aircraft delivering Sarin munitions would be expected to minimize its exposure to hostile fire, flying low to the target at high speed. In order to have any semblance of military utility, weapons delivered in this fashion would require an inherent braking mechanism, such as deployable fins or a parachute, which would retard the speed of the weapon, allowing for a more concentrated application of the nerve agent on the intended target.

Chemical ordnance is not intended for precise strikes against point targets, but rather delivery of the agent to an area. For this reason, they are not dropped singly, but rather in large numbers. (The KhAB-250, for instance, was designed to be delivered by a TU-22 bomber dropping 24 weapons on the same target.) The weapon itself is not complex—a steel bomb casing with a small high explosive tube—the burster charge—running down its middle, equipped with a nose fuse designed to detonate on contact with the ground or at a pre-determined altitude. 

Once detonated, the burster charge causes the casing to break apart, disseminating fine droplets of agent over the target. The resulting explosion is very low order, a pop more than a bang—virtually none of the actual weapon would be destroyed as a result, and its component parts, readily identifiable as such, would be deposited in the immediate environs. In short, if a KhAB-250, or any other air delivered chemical bomb, had been used at Khan Sheikhun, there would be significant physical evidence of that fact, including the totality of the bomb casing, the burster tube, the tail fin assembly, and parachute. The fact that none of this exists belies the notion that an air-delivered chemical bomb was employed by the Syrian government against Khan Sheikhun.

Continuing along the lines of Aristotle’s exploration of the relationship between the potential and actual, the efficient cause represents the means by which the object is created. In the context of Khan Sheikhoun, the issue (i.e., object) isn’t the physical weapon itself, but rather its manifestation on the ground in terms of cause and effect. Nothing symbolized this more than the disturbing images of civilian victims, many of them women and children, that emerged in the aftermath of the alleged chemical attack. (It was these images that spurred President Trump into ordering the cruise missile attack on Shayrat air base.) These images were produced by the White Helmet organization as a byproduct of the emergency response that transpired in and around Khan Sheikhoun on April 4. It is this response, therefore, that can be said to constitute the efficient cause in any examination of potential to actuality regarding the allegations of the use of chemical weapons by the Syrian government there.

The White Helmets came into existence in the aftermath of the unrest that erupted in Syria after the Arab Spring in 2012. They say they are neutral, but they have used their now-global platform as a humanitarian rescue unit to promote anti-regime themes and to encourage outside intervention to remove the regime of Bashar al-Assad. By the White Helmets' own admission, the group is well-resourced, trained and funded by western NGOs and governments, including USAID (U.S. Agency for International Development), which had provided it with $23 million as of 2016. 

A UK-based company with strong links to the British Foreign Office, May Day Rescue, has largely managed the actual rescue aspects of the White Helmets' work. Drawing on a budget of tens of millions of dollars donated by foreign governments, including the U.S. and UK, May Day Rescue oversees a comprehensive training program designed to bring graduates to the lowest standard—"light," or Level One—for Urban Search and Rescue (USAR). Personnel and units trained to the "light" standard are able to conduct surface search and rescue operations—they are neither trained nor equipped to rescue entrapped victims. Teams trained to this standard are not qualified to perform operations in a hazardous environment (such as would exist in the presence of a nerve agent like Sarin).

The White Helmets have made their reputation through the dissemination of self-made videos ostensibly showing them in action inside Syria, rescuing civilians from bombed out structures, and providing life-saving emergency medical care. (It should be noted that the eponymously named Osca
          Senior Software Test Automation Engineer - Jurong Island   
Familiarity with commercial and open source test automation and test case management technologies such as JMeter, Robot Framework, Selenium, Watir or Hudson etc...
From Jobs Bank - Tue, 27 Jun 2017 10:03:13 GMT - View all Jurong Island jobs
          Automation Test Engineer for open source frameworks (Investment Banking) - Pasir Ris   
Experience in Continuous Integration Tool – Jenkins / Hudson. Optimum Solutions (Co....
From Jobs Bank - Wed, 28 Jun 2017 09:54:55 GMT - View all Pasir Ris jobs
          IT Developer/Architect - International Software systems - Maryland City, MD   
Proficiencies in DevOps. This individual must be well versed in DevOps using industry standards and open source resources....
From Indeed - Thu, 29 Jun 2017 18:32:30 GMT - View all Maryland City, MD jobs
          OpenStreetMap grows: free maps, three million users worldwide participating Wikipedia-style   
ROME, 21 MAY – Free, no-cost maps for everyone, 'open source' cartographic data useful to public administrations, and fast updates thanks to Wikipedia-style user participation, which has proved valuable for rescue efforts in natural disasters such as the one in Haiti. The community of OpenStreetMap, a project born in 2004, keeps growing […]
          Windows Backup for Enterprises   

TheReal_Joe - Thanks for the input. I too have had many years with BE and never became comfortable with my backups. Granted, I was backing up to tape, not disk. What I don't like about BE is they package the application for every scenario under the sun. And trying to tweak it to my specifications was a nightmare of continuous tweaking. My failure rate was unacceptable. I took classes in San Jose and the instructor and I have become good friends - and still, with his help, I never was confident. Not to mention the licensing model sucks!

Tivoli, on the other hand, is very intuitive and clean. My backups are consistent. I think I might stay with it, but in this economy I have been directed to trim 20% from my budget and I'm looking at open source or inexpensive solutions to get away from licensing costs.

I will look into Yosemite Backup!


          Learning PySpark   
Apache Spark is an open source framework for efficient cluster computing with a strong interface for data parallelism and fault...
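
As a taste of the data-parallel interface the book teaches, here is a minimal word-count sketch in PySpark, assuming a local installation of the pyspark package (the input lines are made up):

```python
# Minimal PySpark sketch: a parallel word count run on a local "cluster".
# Assumes the pyspark package is installed; input lines are illustrative.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("WordCount").master("local[*]").getOrCreate()

lines = spark.sparkContext.parallelize([
    "open source cluster computing",
    "open source data parallelism",
])

counts = (lines.flatMap(lambda line: line.split())   # split lines into words
               .map(lambda word: (word, 1))          # pair each word with 1
               .reduceByKey(lambda a, b: a + b))     # sum counts per word

print(sorted(counts.collect()))  # e.g. [('cluster', 1), ('computing', 1), ...]
spark.stop()
```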
          (IT) Java Developer - W2 only   

Location: New York, NY   

Software Guidance & Assistance, Inc., (SGA), is searching for a Java Developer (w2 only, no c2c) for a contract assignment with one of our premier clients in New York, NY.

Responsibilities:
- Participate in the development and implementation of internal and mission critical external distributed applications.
- Build distributed applications based on the established system architecture and be involved in the entire project life cycle (requirements gathering, development, deployment and maintenance).
- Participate in the detailed technical design of applications with the System Architect and Technical Leads by reviewing UML diagrams and other technical documentation.
- Develop functions and application components in Java on the Websphere platform.
- Integrate written components with frameworks and other common components.
- Make recommendations towards the development of new code or reuse of existing code.
- Develop documentation artifacts.
- Support testing (Unit, Functional, Performance, Access controls).

Required Skills:
- J2EE; Spring framework, including Spring Integration; Oracle 11g; WAS 8; IBM MQ Series; UNIX.
- At least 5-7 years of relevant work experience in Java/J2EE distributed application development, including strong experience with various J2EE components and services.
- At least 5 years of working experience with Java, J2EE, JavaScript, JDBC, HTML.
- 3-5 years working with the Spring framework and a good understanding of AOP concepts.
- Strong analysis and design skills, including requirements analysis, OO design patterns, and UML.
- Knowledge of common JEE patterns; general knowledge of open source frameworks.
- Experience with building and delivering mission critical, fault tolerant applications.
- Minimum two years of experience with Messaging (MQ/JMS) based integration applications.
- Strong understanding of application resiliency, security, scalability and general performance concepts.
- Keen understanding of performance issues and end-to-end debugging capabilities.
- Proven working knowledge of the JEE architecture and of WebSphere or Eclipse IDE.
- Strong knowledge of SQL, stored procedures (PL/SQL) and database fundamentals (Oracle 11g).
- Strong multi-tasking and collaboration skills; strong verbal, written, presentation and interpersonal skills.
- Ability to work independently and as part of a team; ability to work on multiple assignments simultaneously and produce high quality products.
- Strong problem solving and analytical skills demonstrated by the ability to assimilate new information, understand complex topics and arrive at sound analysis and judgment.

Preferred Skills: Sparx EA, Quartz, Hibernate/ORM, JAXB, Subversion, Maven, JUnit, Spring Batch framework; knowledge of the Financial Services industry is a plus.

SGA is a Certified Women's Business Enterprise (WBE) celebrating over thirty years of service to our national client base for both permanent placement and
 
Type: Contract
Location: New York, NY
Country: United States of America
Contact: george wellington
Advertiser: Software Guidance & Assistance
Reference: NT17-01189

          Business Game Changers Radio with Sarah Westall: Open Source Engineering Everything with Robert David Steele   
Episode: Nobel Peace Prize nominee Robert David Steele rejoins the program to discuss Open Source Everything Engineering. Rest of episode description coming soon... 
          Business Game Changers Radio with Sarah Westall: Open Source Intelligence: Taking Back Government Secrecy   
Episode: Intelligence agencies and black projects have come under much more scrutiny as whistleblowers such as Snowden have come forward with evidence showing mass surveillance and secrecy that is not only defying the intent of the Constitution, but also betraying the trust of the American people. Additionally, trillions of dollars have been spent on black projects conducting missions all over the world without Congress or the American people knowing what for. When we have had whistleblowers come for ...
          Dooble   
Dooble Web Browser is a lightweight yet fully functional web browser. Despite its low popularity, the project seems to be quite a worthwhile alternative to other software in this category. The program is based on the WebKit engine and the Qt interface. It stands out for its fast operation and its extensive options for keeping the user secure on the web. Dooble has a standard user interface that can be modified to a limited extent. In the application's main window we find a panel with the page content, an address bar, navigation buttons and a search bar. If needed, we can also use full-screen mode. Main features of the program:
- opening pages in tabs (with the ability to adjust this mechanism to the user's needs),
- a built-in simple file manager,
- a cookie explorer with a search function,
- a database of favorite addresses,
- scaling of page content,
- a built-in FTP client,
- the ability to define exceptions for various mechanisms and technologies (HTTPS, HTTP redirects, JavaScript functions and others),
- popup blocking,
- a history of visited addresses with basic functions (automatic removal of old entries, remembering recent tabs, changing the cache size),
- searching the content of sites,
- the ability to define start pages and default addresses,
- support for proxy servers for different activities (browsing, downloading, FTP),
- encrypting and protecting session data with a password,
- an error console,
- a download manager,
- clearing the user's private data (exceptions, history, cache, etc.),
- the ability to use add-ons that extend the application's functionality,
- saving and printing pages,
- configuring the default style of web pages,
- changing the appearance with themes,
- well-chosen keyboard shortcuts.
The project is developed under Open Source principles (GNU GPL license). On the browser's official website you will find the source code and releases for Windows, GNU/Linux and Mac OS X.
          Reddit: Open source meta-charity?   

I'd like to monetarily support open source software. I'd like it if I could do it via recurring payments as well. Problem is, there are SO MANY projects out there, many of which I may not directly use, but may be very important libraries nonetheless. OpenSSL is just one example of this.

Where I live, there's this charity called United Way. You can donate to them, and they'll take your money and split it up amongst needy organizations that do good work in communities. My question is, is there anything like this in the open source software world? My googling has not found anything useful.

submitted by /u/thecraiggers
          Reddit: A list of companies that sponsor open source software   
submitted by /u/speckz
          Cloud Engineer - Microsoft Global Partner of the Year - Azure Open Source   

          Web Application Developer - Yahara Software - Madison, WI   
MongoDB or other NoSQL databases. We have an exciting opening for a full-stack, open source Web Application Developer (full-time) to join our innovative...
From Yahara Software - Mon, 15 May 2017 15:31:43 GMT - View all Madison, WI jobs
          Senior PHP Developer   
UITOUX Solutions Private Limited - Coimbatore, Tamil Nadu - experience in PHP; 2. Understanding of open source projects like WooCommerce, Magento, OpenCart, WordPress, etc.; 3. Demonstrable knowledge of web...
          Senior Developer -Java & TIBCO (Contract Project) - softvision (tams edition) - Johns Creek, GA   
Experienced in rule engines with TIBCO BE or Open Source DROOLS (JBoss Rules). We are looking for an exceptional Senior Developer (Java & Tibco) to work with...
From softvision (tams edition) - Tue, 27 Jun 2017 23:47:53 GMT - View all Johns Creek, GA jobs
          Tibco Architect - GC/ US Citizen Only - BlueFusion INC - Johns Creek, GA   
Tibco Solutions Architect. Experienced in rule engines with TIBCO BE or Open Source DROOLS (JBoss Rules). Johns Creek, GA....
From Indeed - Tue, 27 Jun 2017 19:49:57 GMT - View all Johns Creek, GA jobs
          Understanding AngularJs: Scope, Filters, Directives, Events in HTML implementation   
AngularJS is an open source JavaScript framework that lets you move presentation logic to the client side, separating it from the application logic that remains on the server. Start your first HTML page with AngularJS: AngularJS is distributed as a JavaScript file and can be added to a web page […]
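To make the tutorial's subject concrete, here is a minimal sketch of the module, scope, filter and event ideas the series covers, assuming AngularJS 1.x is loaded globally as angular; the demoApp, shout and GreetCtrl names are hypothetical illustrations (TypeScript-flavoured), not part of the original tutorial:

// Minimal AngularJS 1.x sketch; assumes the angular.js script is already loaded.
// demoApp, shout and GreetCtrl are hypothetical example names.
declare const angular: any;

const app = angular.module("demoApp", []);

// Filter: transforms bound data on output, e.g. {{ name | shout }}.
app.filter("shout", () => (input: string): string =>
  (input || "").toUpperCase() + "!");

// Controller: places a model and an event handler on $scope.
app.controller("GreetCtrl", ["$scope", ($scope: any) => {
  $scope.name = "world";   // two-way bound via the ng-model directive
  $scope.reset = () => {   // fired from the view via ng-click="reset()"
    $scope.name = "";
  };
}]);

// Matching markup, shown as a comment:
// <div ng-app="demoApp" ng-controller="GreetCtrl">
//   <input ng-model="name"> <button ng-click="reset()">Reset</button>
//   <p>{{ name | shout }}</p>
// </div>

With this wiring, typing in the input updates $scope.name, the expression re-renders through the shout filter, and the ng-click event resets the model.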
          Content Specialist - OpenText ECM - Accenture - Canada   
Tomcat, WebSphere, Weblogic, Apache Http, Spring tcServer, Solr, open source packages. Accenture is a leading global professional services company, providing a...
From Accenture - Tue, 27 Jun 2017 02:50:27 GMT - View all Canada jobs
          Digital Technology Developer Sr Manager - Accenture - Canada   
Tomcat, WebSphere, Weblogic, Apache Http, Spring tcServer, Solr, open source packages. Experience with project automation technology:....
From Accenture - Wed, 12 Apr 2017 10:04:31 GMT - View all Canada jobs
          WCM Senior Developer - Accenture - Canada   
Tomcat, WebSphere, Weblogic, Apache Http, Spring tcServer, Solr, open source packages. Experience working with relevant WCM or eCommerce packaged solutions such...
From Accenture - Fri, 07 Apr 2017 03:45:10 GMT - View all Canada jobs
          How to use locked apps on a rooted phone   

Google's SafetyNet wants to prevent rooting. The open source community behind the root tool Magisk plays a cat-and-mouse game with the app developers.



          Assessing process, content, and politics in developing the global health sector strategy on sexually transmitted infections 2016–2021: Implementation opportunities for policymakers   
by Andy Seale, Nathalie Broutet, Manjulaa Narasimhan. Andrew Seale and colleagues discuss the development of a global strategy to counter sexually transmitted infections. Source: www.plos.org. All site content, except where otherwise noted, is licensed under a Creative Commons Attribution (CC BY) license.
          Elimination of mother-to-child transmission of HIV and Syphilis (EMTCT): Process, progress, and program integration   
by Melanie Taylor, Lori Newman, Naoko Ishikawa, Maura Laverty, Chika Hayashi, Massimo Ghidinelli, Razia Pendse, Lali Khotenashvili, Shaffiq Essajee. Melanie Taylor and colleagues discuss progress towards eliminating vertical transmission of HIV and syphilis. Source: www.plos.org. All site content, except where otherwise noted, is licensed under a Creative Commons Attribution (CC BY) license.
          Pathways and progress to enhanced global sexually transmitted infection surveillance   
by Melanie M. Taylor, Eline Korenromp, Teodora Wi. Melanie Taylor and colleagues discuss global initiatives for surveillance of sexually transmitted diseases. Source: www.plos.org. All site content, except where otherwise noted, is licensed under a Creative Commons Attribution (CC BY) license.
          Effectiveness and equity of sugar-sweetened beverage taxation   
by Sanjay Basu, Kristine Madsen. Sanjay Basu and Kristine Madsen discuss the effects of taxes on sugar-sweetened beverages in both Australia and Berkeley, USA. Source: www.plos.org. All site content, except where otherwise noted, is licensed under a Creative Commons Attribution (CC BY) license.
          Vaccination to prevent human papillomavirus infections: From promise to practice   
by Paul Bloem, Ikechukwu Ogbuanu. In an essay, Paul Bloem and Ikechukwu Ogbuanu discuss the public health implications of HPV vaccination. Source: www.plos.org. All site content, except where otherwise noted, is licensed under a Creative Commons Attribution (CC BY) license.
          Get StatsJunky 3 Day Sale Starts Now   







Would you rather pay $297.00 for a LIFETIME membership into this incredible affiliate marketing tool - or have to pay $699/year like everyone else...

Well it's your lucky day - because today StatsJunky is starting their big 3 Day BIRTHDAY BASH sale, and it's going to be the most beneficial sale for affiliate marketers that has happened in years.

CLICK HERE NOW TO GET STARTED ON YOUR LIFETIME MEMBERSHIP








Designed for Pay-per-click and affiliate marketers - StatsJunky is a one-of-a-kind, revolutionary desktop application that will track ALL of your Affiliate and PPC stats including keyword level profit/loss, across multiple networks, securely and automatically.

If you're still not sure what StatsJunky is, here's what Shoe Money had to say about StatsJunky

"All serious affiliate marketers doing substantial profit margins always have 1 major thing in common and that is a central "Dashboard" of all of their earnings among all the networks. Most marketers have several accounts, some even with the same company."

"So what makes StatsJunky different from all the open source/hosted tracking applications?

Less Points Of Failure:
With hosted/open source tracking apps, you have to invest tons of time setting up your dashboard and generating tons of links for every campaign. Not only that, but running software like this can be risky. If the hosted server is down or delayed, this is going to drastically cut into your revenue - which is a HUGE point of failure.

Security:
If you use StatsJunky, all of the data stays on your computer. You don't have to worry about a hosted application having all of your campaigns/keywords.

Plugins:
All major affiliate networks are compatible with StatsJunky and if there's something you want - they will write a plugin for you.

Education:
They conduct DAILY webinars for StatsJunky users."

Buy StatsJunky for only $297, a huge discount off its normal $699/year price tag.

This sale ends Thursday at Midnight - so get in while you can.




StatsJunky has absolutely made a marked improvement in multiple aspects of my business ... saving me time, highlighting strong affiliate programs, pinpointing the profitable keywords, providing instant access to my data via your cell phone, etc. Check it out Today!
          Killer PPC Stats Automation Software   







If you've ever been frustrated with sub ids, exporting stats and manually creating profit loss reports at a keyword level ...

... then read this now ...

There's a smokin' hot software program called StatsJunky that automatically tracks your affiliate and PPC profit and losses all the way down to a keyword level.

You just go through StatsJunky's 5 minute wizard and you're done!

There's never been anything like it before and it's endorsed by Aymen from Arbitrage Conspiracy and Affiliate Guru ShoeMoney!

But Don't Rush Over to Buy it Just Yet!

Why?

Because this coming Tuesday StatsJunky is celebrating its birthday with a 3 day special, where you can get StatsJunky for a FRACTION of the normal price!

Normally, this software is a steal at $79/month or $699.00/year.

But This Tuesday July 21st Starting At 8AM You Can Get LIFETIME ACCESS to StatsJunky for only $297!

That's an outrageous discount and it's only good for these 3 days - starting Thursday at midnight the price goes back up to $699 PER YEAR!

That price will NEVER be offered again!!

Here's what Shoe Money had to say about StatsJunky

"All serious affiliate marketers doing substantial profit margins always have 1 major thing in common and that is a central "Dashboard" of all of their earnings among all the networks. Most marketers have several accounts, some even with the same company."

"So what makes StatsJunky different from all the open source/hosted tracking applications?

Less Points Of Failure:
With hosted/open source tracking apps, you have to invest tons of time setting up your dashboard and generating tons of links for every campaign. Not only that, but running software like this can be risky. If the hosted server is down or delayed, this is going to drastically cut into your revenue - which is a HUGE point of failure.

Security:
If you use StatsJunky, all of the data stays on your computer. You don't have to worry about a hosted application having all of your campaigns/keywords.

Plugins:
All major affiliate networks are compatible with StatsJunky and if there's something you want - they will write a plugin for you.

Education:
They conduct DAILY webinars for StatsJunky users."

Don't buy it yet. You'll wind up paying a lot more for it.

Then get ready for my email Tuesday at 8AM giving you the special link to get a lifetime membership into StatsJunky for only $297!

It's a killer deal and if you're an affiliate marketer you'd be a fool to miss out on it.




StatsJunky has absolutely made a marked improvement in multiple aspects of my business ... saving me time, highlighting strong affiliate programs, pinpointing the profitable keywords, providing instant access to my data via your cell phone, etc. Check it out Today!
          Zephyr QA Leader - Intel - Bangalore, Karnataka   
We have a long track record of contributing to and sponsoring a wide variety of open source projects, from the Linux kernel to the visualization stack to large...
From Intel - Fri, 02 Jun 2017 22:17:46 GMT - View all Bangalore, Karnataka jobs
          Zephyr Test Automation and Test Tool Engineer - Intel - Bangalore, Karnataka   
We have a long track record of contributing to and sponsoring a wide variety of open source projects, from the Linux kernel to the visualization stack to large...
From Intel - Wed, 10 May 2017 10:24:48 GMT - View all Bangalore, Karnataka jobs
          Using Joomla Video Tutorials To Learn Joomla Fast & Easy   
Joomla Video Tutorials are fast becoming the best way to learn Joomla and the open source software's various modules, plugins and components. It is now labelled one of the world's leading open source platforms and is more popular than Drupal, though not quite up there with WordPress. Joomla training online is surpassing the traditional offline Joomla courses offered [...]


          Reference Frame Development   

Background

For the last couple weeks I have been working on a reference frame design of sorts, so I wanted to put the idea, with some context, out into the community for feedback. My particular specialty is designing and building frames. BoxBotix being the design with the most time invested (man years by now). I also happen to run FlyingFoam with my wife, so we have a few CNC foam designs as well. This reference frame design is an effort to capture some of the lessons learned from our experience, as well as what I have gathered in this, and other communities. It is not original. It is just a refinement of other ideas for use in the context of developing software and hardware in the ArduPilot community. Think of it as Iris meets Open Source.

General Goal

Create a frame system to support hardware and software development that can be built in under an hour with basic hand tools using 3D printed parts and off the shelf components.

Specific Design Requirements

  • Released Open Source under CERN OHL
  • Minimize the number of tools required
  • Minimize the number of parts required
  • Minimize glue required
  • Custom parts should be 3D printable on a 6inx6inx6in desktop class printer
  • Off the shelf parts should be easy to source in your region (E.g. Imperial tubes in US; Metric tubes everywhere else)
  • Base design should scale to fit autopilot plus compute node up to Intel NUC size (4in x 4in)
  • Base design should be multi-modal capable(Copter, Plane, Tailsitter, QuadPlane, Rover)
  • Attachment of key components and payload(s) to frame should be easy and adjustable to allow for various configurations and easy field servicing

Initial Design

The above list is pretty close to what our design goals were with BoxBotix. The biggest difference is this will be an open frame with no dust or water resistance in the base frame. To get the speed and ease of assembly up we will be using mostly carbon fiber tube and limiting the number of 3D printed parts. 3D printing is slow and the parts can get heavy if you do not want to invest a bunch of time and effort in both the design and the tuning of your entire print process. Carbon fiber tubes are light, strong and easy to source these days, so let's prioritize their use for an easy win.

BoxBotix started with the concept of a BrainBox. It is pretty simple and it works. The BrainBox is just a box truss that allows various parts to be attached on the six faces of the box. BUT it has way too much print time and complexity for this kind of use and abuse. Pretty easy to simplify it in an open frame with some CF tubes instead of all the 3D printed parts needed in an enclosed design.

Picture of first version. Really strong. Too big, heavy and way too much print time required.

Annoying Details

  • Starting with Copter. It's an H frame. Likely use 15in props and 6S as that is what I have here. Pixracer and a compute node. Perhaps a Joule on Dev carrier. I figure if it can support that size and weight we can scale down easily enough.
  • I am using threaded inserts. M3 all around for M3 fasteners. They are a bit specialty but pretty easy to source these days. They do require a soldering iron for insertion, but if you are building a robot you should have a soldering iron.
  • Threaded inserts require a min suggested clearance around all sides in order to keep pull-out strength. For our purposes on an M3 insert it is about 9mm clearance. That does not sound like much until you try to make something really small.
  • I started with 0.5in OD pultruded carbon fiber tubes to try to keep cost down (and I have a bunch here). However, we are using raw clamping force for easy assembly and pultruded tubes do not like to be clamped. We may have to use wrapped tubes, which are more expensive and can be harder to source. V2 will use wrapped tubes.
  • I went with a twin tube motor arm design to avoid needing to address a single motor arm tube wanting to twist in the mount. Makes the thing too big. A better compromise is likely to require a simple drill operation to run the 3mm screws through the tube as part of the box clamp. What needs to be done to make that doable is an easy way to keep all tubes jigged in position when drilling by hand. Working on it for v2.
  • Orientation of a 3D printed part matters. A bunch. The Z layer will be weakest due to inter-layer adhesion (or lack of it). I have become pretty adept at slicing up a solid part into pieces to allow for max strength, min overhangs and best tolerances when printing. V2 still needs some work to make it quicker and easier to print.
  • I am designing around a "2 perimeter with 20% infill" assumption for printing. Might be able to tweak the design so it can be printed as a solid part OR CNC milled. That can come later.
  • I print using IC3D ABS on several Lulzbot Mini's. I do not like to change colors on the same part. I may print some smaller parts in black ABS, but if it's a large working part I will use natural ABS. Color dyes can change the properties of filament enough to screw with your print settings. 3D printing is enough of a multi-variable nightmare, so I stick mostly to IC3D natural ABS. You can print with whatever you like. I like ABS as it has good service temp, it's tough and I can acetone weld it.


v1 Print Platter. Too many parts and too much plastic.

The first version came off the printers about a week ago and I have been redesigning it in my head ever since. V1 was a tank as my first versions tend to be. V2 coming in the next week. Will drop an update once I get it done.


          (USA-MI-Ann Arbor) Software Developer   
Software Developer
Ann Arbor, MI

Job Overview
We're looking for a senior full-stack software engineer who can take direction and run with it in building out new products and new product features. This role will be involved in our entire stack, and will take ownership of the services they build. You'll be working directly with our CTO, who built several startups (the last of which was acquired by Edmunds.com), and who authors and contributes to many open source projects, including as a member of the Ruby on Rails core team.

What We're Looking For:
A highly energetic "A" player who thrives in a fast-paced software development environment and has had a range of experience working full-stack on web-based applications, as well as back-end data processing applications and pipelines handling large amounts of textual, genomic, or similar data. Our ideal candidate will fully engage in the company by contributing great ideas, design, architecture and code development to help us build the next bio-informatics success stories. This is a full-time position working at our Ann Arbor headquarters.

Position Responsibilities:
  • Work directly with the CTO and VP of Product Strategy to develop new products and product features for our current products
  • Dev-ops, server management, AWS and other VPS
  • Automated testing, code reviews, team reviews
  • Staging and production releases
  • Database management, high-throughput data processing
  • Perform other duties as required to help the Genomenon team achieve its objectives

Required Skills and Background:
  • Software languages / frameworks / libraries: Ruby and Python (or similar object-oriented languages), Ruby on Rails (or similar MVC frameworks)
  • Databases and queues: Postgres (or other SQL), queue and messaging systems, text indexing and search software
  • Front-end technologies: HTML (obviously), CSS (including pre-processors like SCSS or LESS), JavaScript, DOM frameworks/libraries such as Angular.js and React
  • OS: Linux and OS X (Bash and command-line things)
  • Version control: Git (or Mercurial or similar distributed VCS)
  • Demonstrated ability to work in a team environment
  • Strong verbal and written communication skills; able to work independently
  • Self-motivated with a strong work ethic and a "can-do" positive attitude
  • Desire to be a part of the fast-paced, high-energy entrepreneurial experience

Compensation
Competitive base salary, health insurance, PTO and holidays, a great work environment, and equity participation opportunities.

If you are interested in this position, please send your resume with salary requirements to [careers @ genomenon.com]
          Forum Post: RE: CCS/EK-TM4C123GXL: Gmake error after making new project in CCS.   
I am still getting errors even after following the steps given earlier. Here is a snapshot of the console:

**** Build of configuration Debug for project Pushbutton1 ****

"C:\\ti\\ccsv6\\utils\\bin\\gmake" -k all
'Building file: ../main.c'
'Invoking: ARM Compiler'
"C:/ti/ccsv6/tools/compiler/arm_15.12.3.LTS/bin/armcl" -mv7M4 --code_state=16 --float_support=FPv4SPD16 -me --include_path="C:/ti/ccsv6/tools/compiler/arm_15.12.3.LTS/include" -g --gcc --define=ccs="ccs" --define=PART_TM4C123GH6PM --diag_wrap=off --diag_warning=225 --display_error_number --abi=eabi --preproc_with_compile --preproc_dependency="main.d" "../main.c"
>> Compilation failure
subdir_rules.mk:7: recipe for target 'main.obj' failed
"../main.c", line 27: fatal error #1965: cannot open source file "inc/hw_memmap.h"
1 catastrophic error detected in the compilation of "../main.c".
Compilation terminated.
gmake: *** [main.obj] Error 1
'Building file: ../tm4c123gh6pm_startup_ccs.c'
'Invoking: ARM Compiler'
"C:/ti/ccsv6/tools/compiler/arm_15.12.3.LTS/bin/armcl" -mv7M4 --code_state=16 --float_support=FPv4SPD16 -me --include_path="C:/ti/ccsv6/tools/compiler/arm_15.12.3.LTS/include" -g --gcc --define=ccs="ccs" --define=PART_TM4C123GH6PM --diag_wrap=off --diag_warning=225 --display_error_number --abi=eabi --preproc_with_compile --preproc_dependency="tm4c123gh6pm_startup_ccs.d" "../tm4c123gh6pm_startup_ccs.c"
'Finished building: ../tm4c123gh6pm_startup_ccs.c'
' '
gmake: Target 'all' not remade because of errors.

**** Build Finished ****
          TuxJam 59 – Revenge of the Jami   
The audio-friendly tones of newish recruit Lovebug join Kevie and mcnalu's Scottish harmonics to bring you a symphony* of free and open source and creative commons goodness. First they do their usual tour of DistroWatch, then relate their experiences of the Openbox-based OBRevenge. They then move on to tour task management and diary apps … Continue reading "TuxJam 59 – Revenge of the Jami"
          YOUNG PEOPLE FROM ESPOSENDE LEARN TO PROGRAM ON THE GLOBAL CODERDOJO PLATFORM   

CoderDojo Esposende was a success

With a room full of children between 7 and 17 years old: that is how CoderDojo Esposende began last Saturday, 1 July. The opening of the first class was attended by the President of the Esposende Citizens' Association, Maria Araújo, who addressed a few words to the young programmers, noting that coding can be a force for change in the world and pointing out that all the programs they will use throughout the classes are Open Source. That is, they are free for everyone, so the students can use all the resources freely outside the CoderDojo sessions.

This first class at the Esposende Dojo took place at the premises of ACICE - the Commercial and Industrial Association of the Municipality of Esposende - and was dedicated to Scratch, an application that lets young people start block-based programming, helping them understand how basic programming works.

The final goal of the class was to build a small maze game in which the young programmers could see the end result of their programming.

Throughout the session it was possible to see the enthusiasm of the youngest participants in learning Scratch. During the two-hour class they were able to try out different programming options, socialize with other young people and, above all, share ideas while working towards a common goal.

The young programmers are rewarded with digital badges: CoderDojo rewards each programmer both for attending class and for presenting the projects they develop. These badges are recognized by the various Dojos around the world.

It is important to note that parents and guardians are invited to stay and take part in the class, involving parents and children in a unique way.

Upcoming classes will cover topics such as Arduino and the C++ programming language, in which the young programmers will have the opportunity to build and develop independent interactive objects.

An important aspect of CoderDojo is fostering creativity and fun in a social environment. CoderDojo makes developing and learning to program a fun and positive experience.

It is worth recalling that CoderDojo and Raspberry Pi have joined forces to make CoderDojos more and more of a reality. In Esposende, CoderDojo came about through the Esposende Citizens' Association, which is once again backing innovative projects for the municipality. This Dojo is part of the international community of coding clubs.


          Software Engineer - Bridgewater Associates - Westport, CT   
Aren’t a punch-the-clock coder — technology has always been pervasive in your life, from building drones to contributing to open source sites....
From Bridgewater Associates - Sun, 25 Jun 2017 06:53:27 GMT - View all Westport, CT jobs
          Software Developer - Bridgewater Associates - Westport, CT   
Aren’t a punch-the-clock coder — technology has always been pervasive in your life, from building drones to contributing to open source sites. Possess high...
From Bridgewater Associates - Tue, 23 May 2017 10:21:21 GMT - View all Westport, CT jobs
          DIY Video Game Using Arduino (Arduboy Clone)   
There is an 8-bit, credit-card-sized gaming platform called Arduboy that makes open source games easy to learn, share and play. You can enjoy 8-bit games made by others on this device, or you can make your own games. Since it is an open source project and uses Arduino, I decided to make my own vers...
By: B45i

Continue Reading »
          Drawing the Nine-circuit Transition Labyrinth in the Sand.   
How to draw the nine-circuit Transition Labyrinth--a modern, original, open source, seven circuit classical/eleven circuit medieval/Chartres hybrid. Please see the extra steps at the end for more information on labyrinths, labyrinth patterns, and the history of how/why this hybrid pattern was devel...
By: Jamie Edmonds

Continue Reading »
          Leading in Web Development Company in UK|Codefingers Technology   
Codefingers Technology provides the best service for creating your website. It provides good web development solutions, starting from enterprise-level business, that meet the needs of your business. When you start a project with us, our team will analyse your business and objectives and tailor a successful strategy that will exceed all your expectations. What makes you start your project with us? Our Work - We make easy solutions for your work: you can navigate and access your site easily, and it is our responsibility to build your site with the best web solutions. Our Strategy - We develop your website with the best strategies; each and every page is developed with good functionality, and using our best-fit strategies we develop your website exactly as per your needs. Design and Development - We provide you with creative and unique designs; from the initial design stage, through development, until the product is released, we maintain high quality standards. Fast and Communicative - We provide quick service for better web development solutions, and we are very communicative, giving our clients a commitment to better solutions. Services we provide in Web Development: Magento Web Development - Magento is an open source e-commerce platform written in PHP and a good platform on which to build an object-oriented website; this may include platform choice, design, UI/UX, hosting and catalog structure. PHP Web Development - Your site should be fast and secure, so it is developed using PHP with a fast, secure, professional web development framework; we develop and provide expert insights for building PHP web applications that deliver your business values. Joomla Web Development - For Joomla web development we provide theme customisation and custom plug-ins; with our technology competence and proven methodologies, we deliver cost-effective Joomla solutions. WordPress Web Development - Your site should be high-performance, secure and SEO-friendly, so we provide better, end-to-end solutions for developing WordPress sites. Drupal Web Development - The customisation of your site, with design themes and modules, is done using Drupal development; Drupal sites are a good fit for start-up organisations building their web presence. HTML5 Web Development - Design is the most important part of making your site attractive; our team will provide a unique, attractive design for your website, structuring and presenting your content effectively.
          Enterprise Mobile App Solution   
You can avail yourself of our Enterprise Mobile App Solution, an excellent way to remain competitive in the world. We specialize in providing some of the best mobile app development, with enthusiastic and professional developers who employ the latest technology and trends to beat the challenges you are facing in your business. Expert Areas: *Open Source Development *Digital Marketing *UI/UX Design *Quality Assurance & Testing *Staff Augmentation *IT Infrastructure *Salesforce Development *Consumer Apps. With our excellent service, we serve 1024+ global clients, having delivered 1700+ successful projects. For more information on app development and our services, Visit us at http://www.mobiloitte.com To avail our services, Reach us at sales@mobiloitte.com
          Why web Developers choose AngularJS for front-end Web Development   
Angular JS has been the reigning champion among open source app development technologies. This highly advanced framework has entirely changed the front-end development scenario with its numerous plug-ins and features. The app development process and app testing were never as simple as they are now with Angular JS. Thinkwik India Online Services LLP is one of the best AngularJS development companies. If you are interested in getting your own mobile and web app developed using the Angular JS framework and dream of taking your enterprise mobility to the next level, then contact an AngularJS development company for an informed investment. Address: C-404 Titanium Square, Near Thaltej Cross Road, S.G Highway, Thaltej, Ahmedabad, Gujarat 380054 Contact: +91 8460071113 ID: info@thinkwik.com URL: http://www.thinkwik.com/ http://www.thinkwik.com/angularjs-framework-development/
          Custom Android Application Development services- Reasons to go wi   
Android is an open source platform for mobile development, powered by the Linux operating system. Newer and better Android applications have been attracting more and more phone users around the world. If you too have ideas and want to turn them into mobile applications, it is time to look for a reputable Android application development company. Thinkwik India Online Services LLP is a well-established Android application development company with dedicated and experienced Android developers. They build robust and comprehensive Android applications as per client requirements. Address: C-404 Titanium Square, Near Thaltej Cross Road, S.G Highway, Thaltej, Ahmedabad, Gujarat 380054 Contact: +91 8460071113 ID: info@thinkwik.com URL: http://www.thinkwik.com/ http://www.thinkwik.com/android-application-development/
          High quality cost effective Web Design| E-commerce | Mobile Apps   
You can sit back and focus on your core business whilst we take care of all your IT needs. If you are looking for a professional company that is reliable, cost-effective and provides long-term support, then your search has just ended; we are the partner you have been searching for :) What are the services we provide? Complete website design/re-design with fully responsive design for all devices - all websites developed using the best content management systems in the world - SEO optimization at launch so that your website can start climbing the Google search rankings - attractive, trendy UI design to create a professional brand presence for your business - customization of functionalities like booking appointments, professional contact forms, and integration with all leading social media to auto-post your blog posts - ongoing monthly managed SEO support to promote your website and generate more business for you - help with social media marketing strategy and execution - mobile applications for iPhone, Android and Windows - turnkey e-commerce websites using leading products like Magento and Prestashop - private social networking websites using open source products - an ad delivery system to deliver ads to your suite of websites - a classifieds (like Gumtree) fully manageable ready-made web solution - application support and maintenance for your existing software.
          Responsive Website Designing and Development Company - GRSoft   
GR Soft Solution is one of the leading providers of web design and development services using the PHP and .NET web programming languages. We provide the following sorts of PHP web development services: 1) PHP web development 2) Corporate website development 3) PHP-based CMS development 4) Custom PHP development 5) PHP/MySQL development 6) Web application development 7) Ecommerce development 8) Portal development solutions 9) Custom PHP programming 10) Open source CMS solutions. We assure you that you will get results with us. Contact Details- GR Soft Solution UMA - 107 A, Ansal Plaza Corporate Suites Vaishali, Ghaziabad (Delhi, NCR) Website : www.grsoftsolution.com Contact Number: +91-99.58.008.250 Email ID: Sales@grsoftsolution.com
          Increasing demand of Magento Development!!   
At Vital Concept, we provide creative Magento development services! Our Magento developers have methodical knowledge and meticulous experience in PHP and HTML, in order to make a customized website fully functional. Magento is an open source e-commerce application which is quite popular these days in the technology world. It is an application designed using the Zend framework, and it uses one of the high-end technologies of modern times, i.e. the entity-attribute-value (EAV) database model, for data storage. Magento development is basically object-oriented programming (OOP), while the implementation of the EAV data storage model allows multiple websites and themes to run on the same layout and commands, the same set of blocks and even the same database, which eventually makes this application ideal for e-commerce websites. Ever since e-commerce companies created a boom in the market and leading multinational firms entered the online retail business, Magento development has experienced massive demand. A study reveals that approximately 150,000 sites are known to use Magento as their web application platform. Yes, the Magento application is especially designed in such a way that it allows online editing very easily; therefore it is completely apt for e-commerce websites that require refurbishment at regular intervals. Magento involves a model-view-controller (MVC) pattern which gives the system unique versatility and adaptability. This MVC model enables the engagement of a layout file to control what is displayed on each view, the employment of "blocks" that can be inserted into any view through the layout, the use of a model rewrite system, and coding features. Today this application is used by almost all leading e-commerce websites, including eBay, and it seems the world will see more advanced versions of this application very soon.
          Hire a custom AngularJS Development Company �" Thinkwik   
AngularJS is an advanced open source platform for web applications introduced by Google, and a comprehensive framework for rapid front-end development. Thinkwik employs highly specialized AngularJS developers who are experienced in JavaScript and jQuery and can create real-time applications easily in this framework. We additionally provide various AngularJS application development services to customers based on their needs. We are recognized as the best AngularJS development company for creating single-page applications in an agile way. Address: C-404 Titanium Square, Near Thaltej Cross Road, S.G Highway, Thaltej, Ahmedabad, Gujarat 380054 Contact: +91 8460071113 ID: info@thinkwik.com URL: http://www.thinkwik.com/ http://www.thinkwik.com/angularjs-framework-development/
          Hadoop Online Training   
Hadoop and Big Data Online Training is offered by MKRInfotech from Hyderabad. We provide instructor-led training by trainers with real-time experience, and we also provide live interaction with the instructor 24*7. From our institute, novices can expect services like course content, live session recordings, time flexibility, and a live project with assistance, along with course materials and hospitality. Hadoop is open source software developed within a Java framework for processing large data sets. Main concepts in the course: •Introduction to Big Data and Analytics •Introduction to Hadoop •Reporting Tool •HDFS - Hadoop Distributed File System •Hadoop Programming Languages •Overview: Hadoop Developer. Do not hesitate to ask your query: Email: hr@mkrinfotech.com Phone: +91-9948382584 , 040-42036333 Web: www.mkrinfotech.com
          PHP Business Review Script �" i-Netsolution   
http://www.i-netsolution.com/item/business-review-lisitng-script/854143 In recent days, we have begun studying customer attitudes between viewing and purchasing. Reports show that the proportion of people consulting online reviews is increasing day by day: approximately eight out of ten customers survey online reviews before purchasing a product. So a product rating & review script is among the highest-potential businesses in the online industry. That's why our developers have spent considerable time building this unique script. We have incorporated many distinctive functionalities into this PHP Business Review Script, which can be used for multiple purposes and as a revenue management system. You can use this business review script website for individual or multi-review of products, and this script can even serve as a classifieds script, so it is also known as a multi-purpose review management system. It also has such features as cross-platform support, free registration, pricing plan management, progressive search, advertisement management, multi-currency support, multi-language support, Google Maps integration, social media integration, blog management, account management and favourites list management. You can customize the whole site without any technical knowledge, because we provide fully open source code. We also back you not only through installation of our script but for ten years with a warranty for technical support, plus five years of free source code upgrades. We also provide customer support 24x7 via our proficient technical team. To Contact our i-Netsolution Team Website URL: http://www.i-netsolution.com Mail us: info@i-netsolution.com Make a Call: India – (+91) – 9841300660 Make a Call: (USA) – (+1) 325 200 4515 Make a Call: (UK) – (+44) 203 290 5530
          AngularJS Development Company �" Features of AngularJS Development   
AngularJS is an open source web application framework. AngularJS applications can run on all major browsers and smart phones including Android and iOS based phones/tablets. If you are looking for AngularJS Development Company, who offers to develop customize Application according your requirement within the allocate time period at affordable price then your search is over here. Thinkwik India Online Services LLP as the expertise AngularJS Development Company.with highly skilled AngularJS developers can deliver highly customized web apps with AngularJS framework. Address: C-404 Titanium Square, Near Thaltej Cross Road, S.G Highway, Thaltej, Ahmedabad, Gujarat 380054 Contact: +91 8460071113 ID: info@thinkwik.com URL: http://www.thinkwik.com/
          Hire the Best Technology Partner for Startup Solutions- CODIANT   
A quick shout-out to startup companies! Codiant Software Technologies brings you original online experiences for your budding or planned ideas, no matter how big or small. We offer a comprehensive package of startup solutions for your dream company. We design engaging websites and mobile apps across most mobile platforms, like iPhone, iPad, Android and the web, perfectly tailored to meet your specifications irrespective of their simplicity or complexity. We are well versed in open source technologies like HTML5, PHP, Magento, NodeJS and AngularJS. Have an app project in mind? Don't be late; transform your ideas into reality now and discuss them with us at sales@codiant.com.
          PHP Directory Listing Script, Business Listing Script   
http://www.i-netsolution.com/item/php-directory-script/891262 PHP Business Directory Script is the most progressive business listing script. You can conveniently power a local city guide or local business website using our script. This script has the distinctive feature of persistent, accessible keywords that produce progressive search results even for small businesses via an amazing search engine. Our technical team has built in additional features like responsive design, listing management, content management, PayPal payment integration, advanced CMS management, smart categorization, a daily review updating system, Google Maps integration, blog customization, optimized service details, social media connections, amazing membership plans, a newsletter and Google optimization. This PHP Directory Listing Script is categorized, preferably arranged alphabetically by business name, and advertising can be sold within it. This script has unanticipated revenue benefits that will make you recommend it to others. You can check the activity logs of the users. If you have technical knowledge, you can customize your site, because we provide one hundred percent fully open source code. We also promise you 10 years of technical support and five years of free upgrades. To Contact our i-Netsolution Team Website URL: http://www.i-netsolution.com Make a Call: India – (+91) – 9841300660 Make a Call: (USA) – (+1) 325 200 4515 Make a Call: (UK) – (+44) 203 290 5530
          Responsive Ecommerce web design Company   
Welcome to Bangalore Web Guru: Crafted Responsive Web Design For Your Intelligent Business. Bangalore Web Guru was founded by a team of highly experienced web designers, web developers and digital marketing experts located in Bangalore, India. At our responsive website design company, over the last 7 years, our 50-plus team of savvy and experienced web experts has delivered a competitive advantage by being able to tailor flexible responsive web design solutions to meet specific partners' needs. Every step of our process is thoroughly instilled with dedication and attention to detail, with the sole focus of doing it right. We also provide a wide range of eCommerce solutions, whether open source software like Magento or a custom-built eCommerce website developed specifically for you. Your e-commerce website will automatically sell your products, notify you of new sales, process the payments, track inventory and communicate with your customers, and thus improve your business. You will be able to update everything on the website yourself. With the built-in CMS software, you can add an unlimited number of products, images, pages and links quickly and easily. Please see our successful clients here: www.bangalorewebguru.in/Portfolio.html Why Switch To Responsive Website Design? With mobile use continuing to increase amazingly, your website needs to offer users a seamless experience across all devices. A responsive design is a website that can be viewed on any screen; basically, the responsive website will stretch or shrink to fit any type of device, from a small smartphone screen right up to a projector screen! Our Other Responsive Services: * Responsive Wordpress Web Design * Responsive Joomla Web Design * Responsive Magento Web Design * Ecommerce Web Development * CMS Website Development * Digital Marketing Services. Interested in a responsive web design for your project? Contact us: http://www.bangalorewebguru.co.in/ Skype : zinavotechnologies Mail ID : sales@zinavo.com Call us IND : +91 8296446686/91-80-41644089 UK: +44 203 289 8924
          Realty Classifieds Script|Real Estate Listings Script   
http://www.phprealestatescript.org/php-realestate-script.html PHP Real Estate Script is a real estate website script, developed in PHP and MySQL, mainly designed for real estate companies to promote their properties. Its user-friendly design enables you to be the owner of a real estate listing script. This PHP property portal script has the essential adaptability and simplicity necessary for any realty classifieds script or property owner website. This open source real estate agency script comes with a feature-rich, simple-to-use interface and a secluded admin area to create, edit, and delete new listings with multiple images. Without any technical knowledge, anyone can operate and maintain our Makaan clone script. The lighter design optimizes the web page display while viewing multiple properties and performing searches. We provide the Realty Classifieds Script at an affordable price. Contact Our Support Team Website URL:http://www.phprealestatescript.org/ Make a Call: India – (+ 91) –9841300660 Make a Call: (USA) – (+1) 325 200 4515 Make a Call: (UK) – (+44) 203 290 5530
          Real Estate Script|PHP Real Estate Script   
http://www.phprealestatescript.org PHP Real Estate Script is a complete software solution that will save you money, time and effort. And most importantly, it will boost your online real estate business and assure its success! Our real estate script allows individual owners and real estate agencies to easily publish, manage and organize properties for sale and/or rent. We follow the latest standards in the real estate market, and you control the way your site will look: add your own logo or create your own site template. PHP Real Estate Script is developed with the most powerful open source technologies, PHP and MySQL. This real estate script has many features: SEO-friendly URLs and a feasible user interface with a secure admin panel to create, edit, and delete new property listings with multiple images. For real estate agents, our PHP real estate script provides a clean platform. Our readymade real estate software has all the essential features for property owner websites. This script is lightweight and fast; it reduces time spent when viewing multiple properties. The readymade real estate script is property listing software that allows you to create your own real estate website. It is PHP software scalable enough to use for a variety of business models. The readymade real estate script has rich functions for private sellers, buyers and real estate agents to list their properties for sale or rent, search the database and show featured ads. Private sellers can manage their ads through their personal admin space. This realtor script allows you to automate and simplify the realty business process. Contact Our Support Team Website URL:http://www.phprealestatescript.org Make a Call: India – (+ 91) –9841300660 Make a Call: (USA) – (+1) 325 200 4515 Make a Call: (UK) – (+44) 203 290 5530
          Why Developers Prefer Node JS to Develop Web Applications?   
Node JS is an enormously popular open source platform whose adoption grows each and every day. Thinkwik India Online Services LLP is a trusted Node JS development company offering all sorts of web services. At Thinkwik, developers are highly experienced and well versed in an array of different programming languages, and they will help you decide on the most practical and cost-effective approach for your project. Address: C-404 Titanium Square, Near Thaltej Cross Road, S.G Highway, Thaltej, Ahmedabad, Gujarat 380054 Contact: +91 8460071113 ID: info@thinkwik.com URL: http://www.thinkwik.com/
          Podcast for Teachers, Techpod, Vol 2 Ep 97 7/23/07 NECC Wrap Up Part II: Media Literacy, New Resource Announcement (re: Classroom Robotics), the Skinny on Integrating Art and Technology in All Subjects, Otter Box Review and More, Email: podcastforteachers@gmail.com   
Don't miss an awesome episode from Kathy and Mark as they not only continue to give you the run down on their scouting of resources at NECC, but provide terrific summer PD for the PFT family. Mark and Kathy run down items that reached out from behind the vendor booths and grabbed their attention: the good, the sad, and the mediocre. JPG? GIF? The rules are made to be probed as all teachers become art integrating teachers and all classrooms transform into art rooms through the use of user friendly, free tech resources. And what about that new blog, CLASSROOM ROBOTICS? Media Literacy, New Resource Announcement (re: Classroom Robotics), the Skinny on Integrating Art and Technology in All Subjects, Otter Box Review and More. From Google's Picasa, to open source GIMP, ArtRage, SRA Science Photo Library, SRA Tech Knowledge, Quark Xpress, and the National Center for Women in Information Technology to a new robotics blog (classroomrobotics.blogspot.com), don't miss the amazing resources discussed and packed into this episode. The Classroom Robotics BOOK by Mark Gura and Dr. King is published by Information Age Publishing. PFT recommends ordering this book directly from Information Age for the best service! www.infoagepub.com. Please take the PFT survey and you will have a chance to win a handheld digital recorder! Time is running out. http://www.retc.fordham.edu/pftdata/pftsurvey.asp OR at our website click SURVEY. Have YOU left your mark on the PFT Frappr map? Tune in to your favorite weekly podcast with More Ed Tech You Can Use. Check the www.podcastforteachers.org website for all resources, articles and links at http://www.podcastforteachers.org and resources Email podcastforteachers@gmail.com PFT's name and content is developed, produced and copyrighted (p) by Fordham University, King and Gura, 2006-2007. All rights reserved. Our sponsors include Fordham's master's in adult education and HRD online degree program www.fordham.edu/gse/aded, www.TransformationEducation.com, Libsyn.com and Learningtimes.net
          Podcast for Teachers, Techpod, Vol 2 Ep 69 12/25/06 Holiday Treats and Tech Conversations Warm Up! Open Source and Open Tech From the Classroom to the Library and the Stars.   
Holiday Astronomical Treat, Report from DRELC, Update on Assistive Technology, Shout out to PFT Listener Podcast, School Library Journal Resource and podcast. Open Source having its day in school? You can call it Open Technology; we have called it Open Source, and we have plenty ready for you! An astronomical treat and a podcast on astronomy wonders are introduced. Mark shares a report on conversations in Arizona with Ed Tech leaders. Gcast, wikis and podcasts as web 2.0 realities for imaginations. Join in the discussion! A great PFT SPOTLIGHT 141 podcast example of student involvement in podcasting, and also of teachers guiding the use of podcasts for instructional applications. School Library Journal provides insight and resources for technology use. Appresso technologies includes a closed captioning application. Listeners can enter for a 2nd chance to win by the Jan 18, 2007 5pm EST deadline. SPOTLIGHT: Send your unique ideas of how you use ed tech in teaching and learning with your students. We want to share them with PFT listeners and teachers. Submission form and details are at www.podcastforteachers.org/pftspotlight.html - a chance to win a digital voice recorder/mp3 player! Greatest news: the full details of the 2nd Annual PFT Best Podcasting Education Awards were discussed today! Deadline 5/1/07; the winners' program or school will also receive a $100 prize and a certificate of award. The web page PFTbestpodcastawards.html has details. Check the www.podcastforteachers.org website for all resources, articles and links at http://www.podcastforteachers.org/ResourcesbyPodcast.html Let us know your take on the news and resources: email podcastforteachers@gmail.com Support PFT: You can buy your own Fordham RETC MP3 player/voice recorder combo for yourself or students and colleagues - bulk pricing available too; visit http://www.retc.fordham.edu and for books http://www.infoagepub.com. Past episodes through our website portal www.podcastforteachers.org PFT's name and content is developed, produced and copyrighted (p) by Fordham RETC, Center for Professional Development, King and Gura, 2006. "More Ed Tech You Can Use Today and Tomorrow from Podcast for Teachers(sm)!" All rights reserved.
          Podcast for Teachers, Techpod, Vol 2 Ep 68 12/18/06 New Voices on the Net: Wikipedia Speaks, Ask A Ninja, Negroponte's Wiki and Google in the Classroom.   
New Voices on the Net: Wikipedia Speaks, Ask A Ninja, Negroponte's Wiki and Google in the Classroom. New voices in the conversations on the Internet: a major publisher purchased by an ed tech media company (Riverdeep); Google and Global SchoolNet build worldwide conversation and contributions to classroom materials about global warming. A tongue-in-cheek definition of podcasting from an unusual source: Ask a Ninja helps us understand younger perspectives on new media. Broader understandings of possibilities for the Negroponte project (NY Times). Open conversations on development, innovation and programming: the One Hundred Dollar Laptop project revealed in a wiki by the founder. Discussing the possibilities for the Open Source movement to gain new exposure and understanding through this project. Wikipedia: not only text contributions but audio articles; and PFT Spotlight listener winner Mrs. Gamache and Kinderplay. Listeners can enter for a 2nd chance to win by the Jan 18, 2007 5pm EST deadline. SPOTLIGHT: Send your unique ideas of how you use ed tech in teaching and learning with your students. We want to share them with PFT listeners and teachers. Submission form and details are at www.podcastforteachers.org/pftspotlight.html - a chance to win a digital voice recorder/mp3 player! Greatest news: the full details of the 2nd Annual PFT Best Podcasting Education Awards were discussed today! Deadline 5/1/07; the winners' program or school will also receive a $100 prize and a certificate of award. The web page PFTbestpodcastawards.html has details. Check the www.podcastforteachers.org website for all resources, articles and links at http://www.podcastforteachers.org/ResourcesbyPodcast.html Let us know your take on the news and resources: email podcastforteachers@gmail.com Support PFT: You can buy your own Fordham RETC MP3 player/voice recorder combo for yourself or students and colleagues - bulk pricing available too; visit http://www.bxmedia.net/bxm0037-full.htm Past episodes through our website portal www.podcastforteachers.org PFT's name and content is developed, produced and copyrighted (p) by Fordham RETC, Center for Professional Development, King and Gura, 2006. "More Ed Tech You Can Use Today and Tomorrow from Podcast for Teachers(sm)!" All rights reserved.
          Mac (Apple) H2testw Alternative Program Called F3 By Michel Machado   
F3 by Michel Machado is open source Linux software to test flash memory capacity that now runs on Mac computers! H2testw does not run on Mac (Apple) computers, as it is Windows based. This has been a serious problem, now solved! Michel first developed the programme to run on Linux. Starting with Version 2.0, F3 […]
          Linux H2testw Alternative Program Called F3 By Michel Machado   
F3 by Michel Machado is open source Linux software to test flash memory capacity. H2testw does not run on Linux. There are two programmes, one to read and one to write files to the item being tested. If you are a Linux user and need to test flash memory cards, USB flash drives and […]
          MySql Online Training Institute   
KITS Online Training Institute provides the best MySQL Online Training by well-trained and certified trainers. MySQL is an open source relational database that is free for most uses. It has wide platform support and can be quickly deployed. MySQL has become a standard for small and medium-sized organizations, as it is affordable, reliable, and fast. We are delighted to be one of the leading IT online training providers, with experienced IT professionals and skilled resources. We have been offering courses to consultants and companies so that they can meet all the challenges in their respective technologies. Training given by KITS is of high quality, and we also provide cost-effective learning.
          Hire Angular.Js Programmers   
Orion Infosolutions is a certified and well-established AngularJS web development company bringing you the best solutions and support for this amazing open source web application framework maintained by Google. With AngularJS, we make the web development environment extraordinarily expressive, readable, and quick to develop in. We offer 100% satisfaction by using standard methods, code reviews, integration, and frequent testing in AngularJS projects. We offer active consultancy on ever-changing technologies. We offer impeccable, real-time support for AngularJS development. Contact Us Now: Orion Infosolutions Skype: orion.infosolutions Email: info@orioninfosolutions.com Phone: India +91-8302758817 (WhatsApp), USA +1 646-503-7753
          Principal SDE Lead - Microsoft - Redmond, WA   
Experience with open source platforms like node.js, Ruby on Rails, the JVM ecosystem, the Hadoop ecosystem, data platforms like Postgres, MongoDB and Cassandra...
From Microsoft - Thu, 29 Jun 2017 10:48:18 GMT - View all Redmond, WA jobs
          Customised Web Based Open Source MLM Software At Affordable Rate   
Binary, Level, Matrix, Generation, Board Plan, Forced Matrix, Unilevel and all other types of MLM Software at reasonable rates. Expertise in all types of MLM Software; served more than 500 clients across the globe; 15 years of experience in developing MLM Software and providing services; team of expert professionals in Open Source development. Website: www.mlm4india.com Email: sales@tornadosoftware.net WhatsApp Call: +91 9099057617 Skype Id: inteligentstar1@outlook.com Contact: +91 9099057617.
          Hire Open Source CMS Web Developers   
Are you looking for custom CMS development services? Hire custom CMS developers and programmers from Mobiweb Technologies to get a great-quality web CMS. Our custom CMS development solutions facilitate different business possibilities. With our strong expertise in web CMS development, we offer custom CMS software development on platforms including Drupal, Joomla, WordPress and Magento.
          Tablacus Explorer 17.7.2   
Tablacus Explorer is a tabbed file manager with Add-on support. [License: Open Source | Requires: Win 10 / 8 / 7 / Vista / XP | Size: 518 KB ]
          Custom Software & Web Development Company UK   
TatvaSoft UK is a preeminent Information Technology Consulting & Software Development Company. Since 2001, we have been providing custom software development services on diverse technology platforms, like Microsoft, SharePoint, Java, PHP, Open Source, BI and Mobile. We serve clients in the UK as well as across Europe. We also have offices in the US, Canada, Australia, South Africa and the Middle East and are equipped to develop solutions for varied industries and domains. We collaborate with enterprises to design, develop and implement complete IT solutions. Our domain expertise, technical competence and processes are in sync with CMMI Level 3 and Microsoft Gold certification. By leveraging our global and flexible delivery models and agile and Scrum methodologies, we have given a competitive edge to many organizations.
          Magento Development Company   
Magento is an open source e-commerce web application. Ask Online Solutions has an experienced and dedicated team of Magento Certified Developers who have an excellent command of all Magento development services. We ensure the best implementation of Magento, with best practices and usability for the ultimate conversion. We are consistent in developing applications on different platforms, ensuring business flexibility and agility. We give the foremost priority to customer-centric solutions for your business needs. We enhance our Magento development with the best code standards and practices, ensuring high performance within budget. Ask Online Solutions' professionals provide easy navigation, making it easier and friendlier for customers to find their most desirable products. Through our Magento development services we integrate easy payment gateways that will benefit your customers.
          Linux career development USA & UK   
Take a leap towards your DREAM JOB with itsprings.com Linux and OpenStack online training. Get full assistance up to job placement. In-demand online IT courses for a great career (Contact UK +44-8000588147, USA +1 646-767-3754). Companies all over the world are rapidly migrating to open source systems. Most IT infrastructure is migrating to Linux, and our online Linux course is designed to meet this demand. Browse through www.itsprings.com and choose the course that suits you. You can also call to speak to a counselor or drop us a message. We are here to train you in the best and latest.
          PHP and MySQL, PHP and MySQL Training, PHP & MySQL Course   
PHP is a highly developed and widely used programming language for the development of web applications. It is an open source language. MySQL is a standard and popular database management system. PHP and MySQL combined are used to create dynamic, data-driven web applications, from CMSs to social networks. This course starts at the fundamental level and proceeds through the intermediate level to the advanced one. Course Outline: Introduction, Installation, Programming Fundamentals, Forms Processing, Sending Emails, Server Variables and Session Handling, Object-oriented programming, MySQL, PDO, Predefined Functions, Design patterns, Web application security, Building a CMS
          Web Application Developer - Yahara Software - Madison, WI   
MongoDB or other NoSQL databases. We have an exciting opening for a full-stack, open source Web Application Developer (full-time) to join our innovative...
From Yahara Software - Mon, 15 May 2017 15:31:43 GMT - View all Madison, WI jobs
          Full Stack Rails Developer - Domain7 - British Columbia   
We lean heavily toward open source tech, and our framework choice of the past couple years has been Ruby on Rails , but we’ve also worked with the LAMP stack....
From Domain7 - Mon, 17 Apr 2017 17:09:19 GMT - View all British Columbia jobs
          What is Kodi and What Can I Do With It – Review   

Kodi is an open source media player that offers an abundance of streaming options. Packed with hundreds upon hundreds of add-ons, you’ll find something for everyone. It offers complete customization […]

The post What is Kodi and What Can I Do With It – Review appeared first on Latest updates on satellite world in Africa.


          Chris Lamb: Free software activities in June 2017   

Here is my monthly update covering what I have been doing in the free software world (previous month):

  • Updated travis.debian.net, my hosted service for projects that host their Debian packaging on GitHub to use the Travis CI continuous integration platform to test builds:
    • Support Debian "buster". (commit)
    • Set TRAVIS=true environment variable when running autopkgtests. (#45)
  • Updated the documentation in django-slack, my library to easily post messages to the Slack group-messaging utility to link to Slack's own message formatting documentation. (#66)
  • Added "buster" support to local-debian-mirror, my package to easily maintain and customise a local Debian mirror via the DebConf configuration tool. (commit)

Reproducible builds


Whilst anyone can inspect the source code of free software for malicious flaws, most software is distributed pre-compiled to end users.

The motivation behind the Reproducible Builds effort is to allow verification that no flaws have been introduced — either maliciously or accidentally — during this compilation process by promising identical results are always generated from a given source. Multiple third-parties then can come to a consensus on whether a build was compromised or not.

I have been generously awarded a grant from the Core Infrastructure Initiative to fund my work in this area.

This month I:

  • Chaired our monthly IRC meeting. (Summary, logs, etc.)
  • Presented at Hong Kong Open Source Conference 2017.
  • Presented at LinuxCon China.
  • Submitted the following patches to fix reproducibility-related toolchain issues within Debian:
    • cracklib2: Ensuring /var/cache/cracklib/src-dicts are reproducible. (#865623)
    • fontconfig: Ensuring the cache files are reproducible. (#864082)
    • nfstrace: Make the PDF footers reproducible. (#865751)
  • Submitted 6 patches to fix specific reproducibility issues in cd-hit, janus, qmidinet, singularity-container, tigervnc & xabacus.
  • Submitted a wishlist request to the TeX mailing list to ensure that PDF files are reproducible even if generated from a difficult path after identifying underlying cause. (Thread)
  • Categorised a large number of packages and issues in the Reproducible Builds notes.git repository.
  • Worked on publishing our weekly reports. (#110, #111, #112 & #113)
  • Updated our website with 13 missing talks (e291180), updated the metadata for some existing talks (650a201) and added OpenEmbedded to the projects page (12dfcf0).

I also made the following changes to our tooling:

diffoscope

diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues.


strip-nondeterminism

strip-nondeterminism is our tool to remove specific non-deterministic results from a completed build.

  • Add libarchive-cpio-perl with the !nocheck build profile. (01e408e)
  • Add dpkg-dev dependency build profile. (f998bbe)


Debian

My activities as the current Debian Project Leader are covered in my "Bits from the DPL" email to the debian-devel-announce mailing list. However, I:


Debian LTS


This month I have been paid to work 16 hours on Debian Long Term Support (LTS). In that time I did the following:

  • "Frontdesk" duties, triaging CVEs, etc.
  • Issued DLA 974-1 fixing a command injection vulnerability in picocom, a dumb-terminal emulation program.
  • Issued DLA 972-1 which patches a double-free vulnerability in the openldap LDAP server.
  • Issued DLA 976-1 which corrects a buffer over-read vulnerability in the yodl ("Your Own Document Language") document processor.
  • Issued DLA 985-1 to address a vulnerability in libsndfile (a library for reading/writing audio files) where a specially-crafted AIFF file could result in an out-of-bounds memory read.
  • Issued DLA 990-1 to fix an infinite loop vulnerability in expat, an XML parsing library.
  • Issued DLA 999-1 for the openvpn VPN server — if clients used a HTTP proxy with NTLM authentication, a man-in-the-middle attacker could cause the client to crash or disclose stack memory that was likely to contain the proxy password.

Uploads

  • bfs (1.0.2-1) — New upstream release, add basic/smoke autopkgtests.
  • installation-birthday (5) — Add some basic autopkgtest smoke tests and correct the Vcs-{Git,Browser} headers.
  • python-django:
    • 1:1.11.2-1 — New upstream minor release & backport an upstream patch to prevent a test failure if the source is not writable. (#816435)
    • 1:1.11.2-2 — Upload to unstable, use !nocheck profile for build dependencies that are only required for tests and various packaging updates.

I also made the following non-maintainer uploads (NMUs):

  • kluppe (0.6.20-1.1) — Fix segmentation fault caused by passing a truncated pointer instead of a GtkType. (#863421)
  • porg (2:0.10-1.1) — Fix broken LD_PRELOAD path for libporg-log.so. (#863495)
  • ganeti-instance-debootstrap (0.16-2.1) — Fix "illegal option for fgrep" error by using "--" to escape the search needle. (#864025)
  • pavuk (0.9.35-6.1) — Fix segmentation fault when opening the "Limitations" window due to pointer truncation in src/gtkmulticol.[ch]. (#863492)
  • timemachine (0.3.3-2.1) — Fix two segmentation faults in src/gtkmeter.c and gtkmeterscale.c caused by passing truncated pointers using guint instead of a GtkType. (#863420)
  • jackeq (0.5.9-2.1) — Fix another segmentation fault caused by passing a truncated pointer instead of a GtkType. (#863416)

Debian bugs filed

  • debhelper: Don't run dh_installdocs if nodoc is specified in DEB_BUILD_PROFILES? (#865869)
  • python-blessed: Non-deterministically FTBFS due to unreliable timing in tests. (#864337)
  • apt: Please print a better error message if zero certificates are loaded from the system CA store. (#866377)


          Web development by bnhtraffic   
Hey, we are creating a new advertising bidding system and we want a ready-made CRM with a few changes per our needs. Please contact us only if you have built, or have serious experience with, open source CRMs. Thanks! (Budget: $1500 - $3000 USD, Jobs: Blog Install, Graphic Design, PHP, Website Design)
          Status.im Raises 300000 ETH in Three Hours   

Status.im, an open source messaging platform that offers mobile browser… Read more at the source: The Blockchain

The post Status.im Raises 300000 ETH in Three Hours appeared first on The Bitcoin News - Leading Bitcoin and Crypto News since 2012.


          Open source .NET license tool, EasyLicense!
EasyLicense is an open-source license tool for .NET applications.
          Open Source Community Manager Intern (M/F)
Company: Euro Information is the IT subsidiary of the Crédit Mutuel bancassurance group …
          Introducing Traffic Light   

I'm a big fan of continuous integration. I've been using it since I was first introduced to it by Mike Swanson, when he put his Ambient Orb up in our office. 8 years later, and I'm still using continuous integration on every project I'm on, including any of my personal projects.

But one thing that I didn't have was a good way to monitor the builds. CCTray was OK, but it wasn't the most visible thing. BigVisibleCruise was nice, but screen real estate is at a premium. So I started looking for something I could use. The Ambient Orb was both expensive and out of stock at the same time, so it was out, and a real traffic light would have taken up more room than I wanted to give it. Eventually, I found a miniature traffic light from Delcom that seemed perfect.

I threw together a quick and dirty application that did nothing but check some XML from CruiseControl.NET, parse out the build status, and change which light was lit up. It was a complete hack, but it worked. When we switched to Hudson (and then Jenkins), it continued to work because Jenkins offers a CruiseControl.NET-compatible output. But then we switched on authentication, and it stopped working. And I left it that way.

Just recently, we had a situation where the build broke for a couple of days and no one noticed it. I didn't notice because the tool I was using to monitor the build wasn't visible enough. So I pulled out the old code for the traffic light monitor and got it working with authentication.
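For the curious, the polling side of such a monitor is tiny. Here's a minimal sketch, assuming Jenkins' CruiseControl.NET-compatible feed at /cc.xml and basic authentication with a user name and API token - the URL, names, and class are placeholders for illustration, not the actual Traffic Light code:

    using System;
    using System.Linq;
    using System.Net;
    using System.Xml.Linq;

    public static class CcXmlPoller
    {
        // Fetches the CruiseControl.NET-compatible feed and returns the
        // lastBuildStatus attribute of every project (e.g. "Success",
        // "Failure", "Unstable").
        public static string[] FetchStatuses(string url, string user, string apiToken)
        {
            using (var client = new WebClient())
            {
                client.Credentials = new NetworkCredential(user, apiToken);
                var feed = XDocument.Parse(client.DownloadString(url));
                return feed.Descendants("Project")
                           .Select(p => (string)p.Attribute("lastBuildStatus"))
                           .ToArray();
            }
        }
    }

From there it's a one-liner to pick the light: green if every status is "Success", red otherwise.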

And I kept going. I added a user interface for adding and editing projects. I added a screen for monitoring the build so you don't have to have a real traffic light (pictured right). I added a system tray icon that shows the current state of builds. I added balloon tool tips when builds happen. And I came up with a bunch of ideas I'd like to do with it.

And then I made it open source.

It's still in its infancy, and setup is a little non-obvious (but getting better!), but it works to monitor builds. Now when the build breaks, a giant red light shines in my office - which I definitely can't miss! If you've been looking for a way to monitor your CI server, then you should take a look at Traffic Light.

I've set up a public Trello board that I'll be using to track features and bugs. There's two cards designated for anyone to contribute new ideas or to submit bugs. Details for the board are available on this card. And of course, if you have a feature you want, I'll accept pull requests! I am also attempting to get the project set up on CodeBetter's TeamCity CI server. I submitted my request about a week ago, but haven't heard anything yet. I don't know if I'll ever hear back or not, but if I do, I'll get links out to that as well (and probably include that as a default project in the application).

I don't expect this application to gain a ton of traction, but it's a useful utility and could be a good learning experience about running an open source project, so I'm excited about it.

Discuss this post


          Using Eventing to Decouple Applications   

I've been writing an application to monitor Jenkins and update a Delcom traffic light with the current build status. I started out with a straightforward approach and it worked well. At first. But as I decided to expand the application to update icons and show a separate window with the current build status, I quickly realized that this wasn't going to be maintainable long term.

Here's what I was doing to update the build status:

    projects.Each(p => p.CurrentStatus = projectStatusService.CheckStatus(p));
    var buildStatus = GetCumulativeBuildStatusFrom(projects);
    delcomService.UpdateBuildStatusTo(buildStatus);

As I started looking at adding other build monitors, my code was going to start to look like this:

    projects.Each(p => p.CurrentStatus = projectStatusService.CheckStatus(p));
    var buildStatus = GetCumulativeBuildStatusFrom(projects);
    delcomService.UpdateBuildStatusTo(buildStatus);
    UpdateIconFor(buildStatus);
    if (monitorForm != null) {
      monitorForm.SetBuildStatusTo(buildStatus);
    }

Notice that the code that's determining the build status is also now responsible for updating the build indicator. And as I added more and more build indicators, this code would have to be touched over and over.

So, rather than continue down the path and not really liking the direction the code was headed, I decided to add eventing.

Before we get to how the code changes, let's look at what we have to add. First, we need an event, which is really just a class:

    public class BuildStatusChanged : IEvent
    {
        public BuildStatus Status { get; private set; }

        public BuildStatusChanged(BuildStatus status)
        {
            Status = status;
        }
    }

The infrastructure to handle it is pretty straightforward. Just one class:

    public static class Eventing
    {
        // one list of subscriber callbacks per event type
        private static readonly IDictionary<Type, List<Delegate>> actions = new Dictionary<Type, List<Delegate>>();

        public static void Register<T>(Action<T> callback) where T : IEvent
        {
            if (!actions.ContainsKey(typeof(T)))
            {
                actions.Add(typeof(T), new List<Delegate>());
            }
            actions[typeof(T)].Add(callback);
        }

        public static void Unregister<T>(Action<T> callback) where T : IEvent
        {
            if (actions.ContainsKey(typeof(T)))
            {
                var item = actions[typeof(T)].FirstOrDefault(i => i == (Delegate)callback);
                if (item != null)
                {
                    actions[typeof(T)].Remove(item);
                }
            }
        }

        public static void Raise<T>(T args) where T : IEvent
        {
            if (actions.ContainsKey(typeof(T)))
            {
                // invoke every subscriber registered for this event type
                actions[typeof(T)].ForEach(a => a.DynamicInvoke(args));
            }
        }
    }

When a class wants to know about an event, it just calls Eventing.Register() passing in a callback. So the DelcomService looks like this now:

    public class DelcomService
    {
        public DelcomService()
        {
            Eventing.Register(ChangeBuildStatus);
        }

        public void ChangeBuildStatus(BuildStatusChanged args)
        {
            // turn the traffic light on
        }
    }

This same type of code would then be added to any forms that need to know about the current build status, as well as the main application thread that is managing the icon for the application.
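As a concrete illustration, a monitoring form might subscribe like this - a hedged sketch in which MonitorForm and statusLabel are hypothetical names, not taken from the real project:

    using System;
    using System.Windows.Forms;

    public partial class MonitorForm : Form
    {
        public MonitorForm()
        {
            InitializeComponent();
            Eventing.Register<BuildStatusChanged>(OnBuildStatusChanged);
        }

        private void OnBuildStatusChanged(BuildStatusChanged args)
        {
            // Raise() may run on a polling thread, so marshal back to
            // the UI thread before touching any controls.
            if (InvokeRequired)
            {
                BeginInvoke(new Action(() => OnBuildStatusChanged(args)));
                return;
            }
            statusLabel.Text = args.Status.ToString();
        }

        protected override void OnFormClosed(FormClosedEventArgs e)
        {
            // Unregister so the static Eventing dictionary doesn't keep
            // a closed form alive.
            Eventing.Unregister<BuildStatusChanged>(OnBuildStatusChanged);
            base.OnFormClosed(e);
        }
    }

The unregister step matters: because Eventing holds static references to its callbacks, a subscriber that forgets to unregister will never be garbage collected.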

As for the code that is checking the build status? It changes slightly:

    projects.Each(p => p.CurrentStatus = projectStatusService.CheckStatus(p));
    var buildStatus = GetCumulativeBuildStatusFrom(projects);
    Eventing.Raise(new BuildStatusChanged(buildStatus));

This is much better. First, the build monitor no longer knows anything about any of the build indicators. Second, if a new build indicator ever is needed (like for an Ambient Orb), this code doesn't change at all.

Thinking in terms of SOLID, we've removed a responsibility from our build monitor, so it truly only has a single responsibility, and we've met the Open/Closed principle as well, because adding new build indicators doesn't require any changes to the code that monitors the build.
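To make the Open/Closed point concrete, here is what a hypothetical new indicator could look like (AmbientOrbService is an invented example, not part of the project) - note that nothing in the monitoring code has to change:

    public class AmbientOrbService
    {
        public AmbientOrbService()
        {
            // Subscribing is all that's needed; the build monitor
            // never learns this class exists.
            Eventing.Register<BuildStatusChanged>(UpdateOrb);
        }

        private void UpdateOrb(BuildStatusChanged args)
        {
            // set the orb's colour based on args.Status
        }
    }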

The code that this post is based on is open source on BitBucket. It's not exactly straightforward to use yet, but it does work - I use it every day to monitor our builds at TrackAbout. I'm working to make configuration easier, and once that's done, I'll write up a bit more about it.

I'm going to attempt to use Google+ for comments, so if you have anything to add, please leave a comment over there.


          Comment on Google's updated Drive client for Windows and Mac delayed by Lateef Alabi-Oki   
I think people are missing the point. The Linux and open source communities have built a set of tools that make Google money. It's dumb for Google to ignore the community that is responsible for the core of their business operations. The Linux kernel developers are not switching to Windows ever. The Google engineers who work on Linux for Chrome OS, Android, and their cloud services are not switching to Windows ever. Google is a Linux shop. Yet they pretend it doesn't exist.
          Climbing the Scholarly Publishing Mountain With SHERPA   

John MacColl and Stephen Pinfield explore the SHERPA project, which is concentrating on making e-prints available online.

 


JISC announced its FAIR Programme (Focus on Access to Institutional Resources) in January of this year. The central objective of the Programme is to test ways of releasing institutionally-produced content onto the web. FAIR describes its scope as: “to support access to and sharing of institutional content within Higher Education (HE) and Further Education (FE) and to allow intelligence to be gathered about the technical, organisational and cultural challenges of these processes.… This programme is part of a broader area of development to build an Information Environment for the UK’s Distributed National Electronic Resource.”(1) It specifically sought projects in the following areas:

· Support for disclosure of institutional assets including institutional e-print archives and other types of collections through the use of the OAI (Open Archives Initiative) protocol.
· Support for the harvesting of the metadata disclosed through this protocol into services which can be provided to the community on a national basis. These services may be based around subject areas or other groupings of relevance for learning and research.
· Support for disclosure of institutional assets through the use of other relevant protocols, for example Z39.50 and RSS.
· Exploration of the deposit of institutional collections with a community archive or to augment existing collections which have proven learning, teaching or research value.
· Experiments with the embedding of JISC collections and services in local institutional portals and how well they can be presented in conjunction with institutionally managed assets.
· Studies into the related issues and challenges of institutional asset disclosure and deposit, including collections management, IPR, technical, organisational, educational, cultural and digital preservation challenges.
FAIR awarded funding to 14 projects in five ‘clusters’: museums and images, e-prints, e-theses, intellectual property rights, and institutional portals (details are given in the Appendix). The Open Archives Initiative lay very firmly behind FAIR, as the call document says: “This programme is inspired by the vision of the Open Archives Initiative (OAI) (http://www.openarchives.org), that digital resources can be shared between organisations based on a simple mechanism allowing metadata about those resources to be harvested into services.… The model can clearly be extended to include…. learning objects, images, video clips, finding aids, etc. The vision here is of a complex web of resources built by groups with a long term stake in the future of those resources, but made available through service providers to the whole community of learning.”(2)

The SHERPA project(3) represents the response to this vision of a number of major research libraries. It is concentrating on making ‘e-prints’ (electronic copies of research papers) available online. The bid was put together under the auspices of CURL (the Consortium of University Research Libraries) which is also contributing to the project funding. The project is being hosted by the University of Nottingham.

The research library perspective

The starting point of SHERPA is the view that the current system of research publication is not working. In this system the research community (predominantly universities) generates research output in the form of papers, which it then gives away free of charge to commercial publishers, who in turn sell it back to the research community at high prices. And the research community does not just give away its services as authors, but also as referees, editors and editorial board members, all mostly free of charge. Ironically, this is a system that does not ultimately work out in favour of researchers. As authors, the potential impact their research output may make is limited in this system since commercial publishers will normally shield their work behind ‘toll gates’ (journal subscriptions or article pay-per-view charges). As readers of the literature, they are prevented by these toll gates from gaining easy access to all of the publications in their field. Even libraries in large well-funded universities cannot afford subscriptions to anywhere near all peer-reviewed journals(4).

Academic libraries are then placed in a difficult position. Journals account for a large proportion of most academic library budgets. And this proportion is growing. Over the last 15 years journal prices have risen by about 10% a year at a time when library budgets have grown by no more than 2 or 3%. Libraries have often had to divert money from other budgets to maintain subscriptions or simply cancel titles. In most cases, they have done both. Many library managers have, as a result, become increasingly frustrated by the system, and those in research universities more than most. It is, after all, these institutions, more than others, who are generating the research output, which they are having to buy back in large quantities and at high prices in order to support ongoing research. Librarians who are buying these publications on behalf of their institutions have been leading voices in saying ‘we cannot go on like this’.

One possible solution is ‘self archiving’. Authors can make their own research output freely available outside the confines of commercial journals.
Until recently, the best way of doing this was simply mounting it on a web site. However, this is not a particularly attractive prospect. It requires those carrying out literature searches to go to the web sites of individuals and research groups in potentially hundreds of different locations. Either that or rely on standard web search engines. Neither of these could give reliable comprehensive access.

The Open Archives Initiative(5) Protocol for Metadata Harvesting (OAI-PMH) is a technical development which addresses this problem. Through the use of a ‘lowest common denominator’ metadata format (unqualified Dublin Core), it allows those producing metadata for all types of digital objects to ‘expose’ their metadata on the internet. The metadata can then be automatically harvested, collected together and made available in a searchable form. The real potential of the protocol lies in its support for interoperability. It is a tool for building union catalogues from a potentially vast range of different collections, and it therefore exploits the ubiquity of the internet to make virtually possible what is physically impossible. E-prints, whether ‘pre-prints’ (which have not yet been peer-reviewed), or ‘post-prints’ (which have), can be deposited and described by the authors themselves or perhaps third parties and made easily available to users. Through the OAI-PMH the metadata created can contribute to a vast worldwide network of resources which can be easily searched.

Of course, the ‘invisible college’ has always operated like this in any case (albeit in a limited way). Researchers do in some cases make free copies of their research available to their peers – via conferences, and on web sites. An interesting variant of this is the culture of working papers produced by academic staff belonging to particular institutions. However, this is an exclusive method of communication. Senior researchers in any discipline will know which institutions across the world have the strongest departments, or those with research interests which match their own – but what about junior researchers, or researchers in interdisciplinary areas? They may miss out on accessing this research. The potential impact of the research is then still limited. Making searchable metadata about these papers easily available would be a big step forward in addressing this problem.

Benefits of OAI-PMH to institutions and their libraries

With a system of OAI-compliant archiving, e-print repositories could replicate content only otherwise available commercially. Making content freely accessible in this way has the potential to improve scholarly communication (by lowering impact and access barriers) but it also has the potential to save institutions and their libraries money. Freeing-up access to the research literature and ensuring it is easily searchable will mean that commercial publishers have to pare down their profit margins and concentrate on adding value in order to retain customers. But of course, it is likely to take a long time before there is a critical mass of content available. This is a massive mountain to be climbed. In some disciplines real progress has already been made. The case of the high-energy physicists who have been using arXiv.org(6) for more than a decade is well-known, but few other disciplines have yet shown an interest in organising themselves around a centralised discipline-specific repository in this way.
One suggested means of redressing this is to put the emphasis on repositories at the institutional level instead of the disciplinary. That is what the SHERPA project – located within the e-prints cluster of the FAIR Programme – will seek to test in the UK. If the impetus comes from within the university, with institutional support mechanisms in place to permit the growth of an institutional repository, then the current unevenness in the disciplinary spread of the free corpus may be reduced(7). Over time, the argument goes, a snowball effect will operate within institutions, and at a national – and international – level, so that a multi-disciplinary free collection of research literature can be built. The institutional library service is in many ways the natural co-ordinator of this activity, performing the role of infrastructure provider.

As part of the SHERPA project, a number of CURL libraries will begin to take on this role. Six open access e-print repositories will be funded within the project: at the Universities of Edinburgh, Glasgow, Oxford and Nottingham, together with a shared archive within the ‘White Rose’ partnership of York, Leeds and Sheffield, and one at the British Library for the research outputs of ‘non-aligned’ researchers. They will use the open source eprints.org(8) software produced by the University of Southampton. The project will investigate the technical and managerial aspects of running these repositories. After the initial work is complete, it is hoped that other institutions will be able to come on board.

SHERPA will be setting up OAI-compliant e-print repositories but it will not (in the first instance at least) be creating aggregated search services. This will be done by others, including new projects funded as part of FAIR. One such project, e-prints UK, will be working in partnership with SHERPA to achieve the best ways of creating metadata so that it can be effectively harvested. One of the key elements of OAI is this separation between repositories (‘Data Providers’) and search services (‘Service Providers’). FAIR gives us an opportunity to try this model out within real organisations. With this experience SHERPA hopes to be in a good position to advise others on setting up these kinds of services from scratch for themselves.

In the short term, the biggest challenge of all is not a technical or managerial one but a cultural one. We need to convince academics that they must also join the expedition. Librarians should now take on the role of change advocates. SHERPA will aim to contribute to this advocacy. Major advocacy campaigns will be mounted in CURL institutions supporting the institutional archive agenda. It is also hoped to contribute to the wider campaign beyond these institutions as well. SHERPA will, for example, put materials used and lessons learned into the public domain. It hopes to be one of the growing number of voices in the academic community arguing for change.

Quality content

One of the key ways of winning over researchers is by demonstrating that e-print repositories can provide access to the quality literature. There are widely held views that free literature on the web is normally of poor quality and that open access repositories are not an appropriate medium for publishing peer-reviewed research. For this reason SHERPA aims to concentrate on collecting refereed content. It will not reject other forms of papers, but it will seek post-prints as its first priority.
Authors will be encouraged to post their work on their institutional repository as well as having it published in journals. Having a good proportion of refereed articles searchable within the SHERPA corpus will help to demonstrate the viability of the approach. Another reason to focus on refereed material is that it is likely that this will define which items in the SHERPA collections are selected for digital preservation. While a pre-print which an author never intends to submit for peer review may still be worth preserving, generally the approach will be to preserve articles once they are in their final form – and this is most easily witnessed by their appearance in the journal literature. The approach taken by SHERPA will then be to collect papers which have been (or will be) also published in the peer-reviewed literature.

For these reasons, SHERPA is keen to engage publisher support for the project. The very choice of the name, indeed, is designed to convey this: ‘Securing a Hybrid Environment for Research Preservation and Access’. This particular ‘hybrid environment’ is one in which a free corpus of research literature can exist alongside a commercial one, and is not necessarily in conflict with it. As the example of high energy physics shows, open access e-print archives do not necessarily kill journals. Journals may however have to change their roles, possibly focusing on managing the peer-review process and adding value to the basic content (both of which of course cost money) rather than being sole distributors of content. The SHERPA project wants to work alongside publishers to investigate how the field of scholarly communication may take shape in the future.

Copyright

A key issue here is copyright. It is common for commercial publishers to require authors to sign over copyright to them before they will publish an article. In some cases, this will give the publisher exclusive publication rights and the author will not be able to self archive the paper. The idea that authors should continue to submit their work to journals but also post their work on e-print repositories runs into problems here. How can projects like SHERPA deal with this?

Firstly, it should be recognised that not all publishers require copyright sign-over. A good number of publishers allow authors to keep copyright. Since authors (to a certain extent) have the choice about where they place their papers, advocates of self-archiving can encourage authors to place their papers with publishers of this sort and thus retain copyright. Where copyright sign-over is required by publishers, the author is sometimes still permitted to distribute a paper for non-commercial purposes outside the confines of the journal. Some publishers have copyright agreements which explicitly allow the posting of e-prints. Once again, authors can be encouraged to submit papers to these publishers. One thing that SHERPA will aim to do will be to examine the copyright agreements of different publishers and publicise what their agreements will and will not allow.

Where exclusive rights are normally expected to be signed over, a number of possible strategies may be adopted. Firstly, SHERPA intends to help authors to negotiate with publishers in order to allow them to self archive. One possible way in which this may be done is to produce a standard ‘back licence’ document that can be appended by authors to publisher copyright agreements.
Such a back licence might state that the author is signing the publisher’s own licence but subject to the terms of the back licence, and the back licence in turn allows the author to retain the right to self archive the work in a non-commercial repository. In other cases, SHERPA hopes to negotiate directly with publishers to persuade them to grant the project a blanket waiver which allows articles to be posted on SHERPA archives at least for the duration of the project.

This may not be as difficult as it might at first appear. The editor-in-chief of an Elsevier journal in informatics, one of the professors of informatics at the University of Edinburgh, recently pursued Elsevier over its policy regarding e-prints. He received a reply in the Bulletin of the European Association for Theoretical Computer Science for October 2001, in an article entitled ‘Recent Elsevier Science Publishing Policies’, which stated ‘… the exclusive distribution rights obtained by Elsevier Science refer to the article as published, bearing our logo and having exactly the same appearance as it has in the journal. Authors retain the right to keep preprints of their articles on their homepages (and/or relevant preprint servers) and to update their content, for example to take account of errors discovered during the editorial process, provided these do not mimic the appearance of the published version. They are encouraged to include a link to Elsevier Science’s online version of the paper to give readers easy access to the definitive version.’(9) This is an interesting departure for Elsevier and perhaps indicates that some publishers are keen to investigate these issues further. Even where there is no interest, things can be done. SHERPA will also investigate ways in which the Harnad-Oppenheim strategy(10) can be employed effectively and appropriately.

Digital preservation

The SHERPA project is also keen to pursue another objective. The CURL Directors, in considering the potential of the Open Archives Initiative, were very interested in the archiving dimension. They wanted a project which would ‘put the archiving into Open Archives’. The reason for this is that, as we move into an electronic-journal-dominated future for research, there are real concerns emerging about the preservation of digital material. Who should take responsibility for the preservation of the academic record? This has traditionally been a research library activity. Peter Hirtle, writing in D-Lib in April 2001, stated: “an OAI system that complied with the OAIS reference model, and which offered assurances of long-term accessibility, reliability, and integrity, would be a real benefit to scholarship.”(11) OAIS is the Open Archival Information System(12) (a completely different standard from OAI-PMH), which emerged in 1999 from work done in NASA on designing a reference model for preserving space data. The model was seized upon by the digital preservation world generally, and used within the JISC-funded CURL Exemplars in Digital Archives (CEDARS) project(13). CURL therefore had a strong interest in implementing an OAIS-based digital preservation project, having initiated the successful work in OAIS model development undertaken by the CEDARS project since 1998. We expect that SHERPA will also be engaged in digital preservation activity for the contents of its archives later in the project, and are talking to funding agencies and various partners about the prospects for this.
Conclusion

The current structure of scholarly communication may have made some sense in a paper-based world. However, in a digital world it is looking increasingly anomalous. Where there is a need for the rapid and wide dissemination of content to the research community, it is found wanting. It is also extremely expensive for the very research community it is trying to serve. The development of institutional repositories is one possible response to the current problems. SHERPA is one project which hopes to go some way in testing out this model. There are key technical, managerial, and cultural issues which need tackling urgently. As the project begins to do this it will disseminate the lessons learned to the wider community in the hope that others will begin the process as well. SHERPA is, of course, just one project within a larger programme. FAIR is just one programme within a larger set of international developments. But it is hoped that FAIR projects, along with others working in this area, can begin to generate some kind of momentum which will enable us to improve the way in which scholarship is carried out in the future.

Appendix: FAIR projects

Museums and Images Cluster (4 projects)
· Petrie Museum, University College London - Accessing the Virtual Museum
· Fitzwilliam Museum, University of Cambridge; Archaeology Data Service, University of York - Harvesting the Fitzwilliam
· AHDS Executive, King’s College London; Theatre Museum, V&A; Courtald Institute of Art, University of London; Visual Arts Data Service, University of Surrey; Performing Arts Data Service, University of Glasgow - Partial Deposit
· ILRT, University of Bristol; University of Cambridge - BioBank
E-Prints Cluster (4 projects)
· CURL (University of Nottingham; University of Edinburgh; University of Glasgow; Universities of Leeds, Sheffield and York (‘White Rose’ partnership); University of Oxford; British Library) - SHERPA (Securing a Hybrid Environment for Research Preservation and Access)
· RDN, King’s College London; University of Southampton; UKOLN, University of Bath; UMIST; University of Bath; University of Strathclyde; University of Leeds; ILRT, University of Bristol; Heriot Watt University; University of Birmingham; Manchester Metropolitan University; University of Oxford; University of Nottingham; OCLC - E-prints UK
· University of Strathclyde; University of St. Andrews; Napier University; Glasgow Colleges Group - Harvesting Institutional Resources in Scotland Testbed
· University of Southampton - Targeting Academic Research for Deposit and dISclosure
E-Theses Cluster (3 projects)
· Robert Gordon University; University of Aberdeen; Cranfield University; University of London; British Library - Electronic Theses
· University of Edinburgh - Theses Alive!
· University of Glasgow - DAEDALUS
Intellectual Property Rights Cluster (1 project)
· Loughborough University; Birkbeck College, University of London; University of Greenwich; University of Southampton - Machine-readable rights metadata
Institutional Portals Cluster (2 projects)
· University of Hull; RDN, King’s College London; UKOLN, University of Bath - Presenting natiOnal Resources To Audiences Locally
· Norton Radstock College, Bristol; City of Bath College; City of Bristol College; Filton College, Bristol; Weston College, Weston-super-Mare; Western College Consortium, Bristol - FAIR Enough
Author Details

John MacColl is Sub-Librarian (Online Services) and Director of SELLIC at the University of Edinburgh. Stephen Pinfield is Assistant Director of Information Services at the University of Nottingham and Director of SHERPA. Both are members of the CURL Task Force for Scholarly Communication.

References
(1) http://www.jisc.ac.uk/pub02/c01_02.html
(2) http://www.jisc.ac.uk/pub02/c01_02.html
(3) http://www.sherpa.ac.uk
(4) See Stevan Harnad, ‘The self-archiving initiative’ Nature: webdebates. <http://www.nature.com/nature/debates/e-access/Articles/harnad.html>
(5) See http://www.openarchives.org
(6) http://www.arxiv.org
(7) See Raym Crow The case for institutional repositories: a SPARC position paper. Washington, DC: SPARC, 2002. Release 1.0. <http://www.arl.org/sparc/IR/ir.html>
(8) http://www.eprints.org
(9) Arjen Sevenster ‘Recent Elsevier Science publishing policies’. Bulletin of the European Association for Theoretical Computer Science 75, October 2001, 301-303
(10) Stevan Harnad, ‘For whom the gate tolls? How and why to free the refereed research literature online through author/institution self-archiving, now’, Section 6. <http://www.cogsci.soton.ac.uk/~harnad/Tp/resolution.htm#Harnad/Oppenheim>
(11) Peter Hirtle, ‘Editorial: OAI and OAIS: What’s in a name?’ D-Lib Magazine 7, 4, April 2001 <http://www.dlib.org/dlib/april01/04editorial.html>
(12) See Consultative Committee for Space Data Systems Reference model for an open archival information system (OAIS), 1999 <www.ccds.org/documents/p2/CCSDS-650.0-R-1.pdf>
(13) http://www.leeds.ac.uk/cedars/
Article Title: "Climbing the Scholarly Publishing Mountain with SHERPA" Author: John MacColl and Stephen Pinfield Publication Date: 10-Oct-2002 Publication: Ariadne Issue 33 Originating URL: http://www.ariadne.ac.uk/issue33/sherpa/intro.html


This article has been published under copyright; please see our access terms and copyright guidance regarding use of content from this article. See also our explanations of how to cite Ariadne articles for examples of bibliographic format.


          Web Focus: Let's Get Serious about HTML Standards   

Brian Kelly encourages authors to treat compliance with HTML standards seriously.


If you talk to long-established Web authors or those responsible for managing large Web sites or developing Web applications intended for widespread use in a heterogeneous environment you are likely to find that the need for compliance with Web standards is well-understood. There will be an understanding of the need to avoid a re-occurrence of the "browser wars" and to minimise development time for an environment in which, especially in the higher education community, end users are likely to use a wide range of platforms (MS Windows, Apple Macintosh, Linux, etc.) and browsers (Internet Explorer, Netscape, Mozilla, Galeon, Lynx, etc.). However, although many experienced Web developers will state their commitment to Web standards, such aspirations are not always implemented in practice. This may be because the importance of HTML compliance is not communicated widely within an organisation (especially when there are likely to be many authors, as is likely to be the case within higher educational institutions); because HTML authoring tools fail to implement standards; or because authors do not accept the need for standards and will either make use of non-standard features or fail to actively address non-compliance with standards. This article aims to persuade HTML authors of the importance of compliance with HTML standards. The article also provides an update on Web standards and contains advice on techniques for ensuring that resources comply with standards and for checking compliance.

The Dangers Of Failures To Comply With Standards

Does compliance with HTML standards really matter? Surely if the page looks OK in the Netscape and Internet Explorer Web browsers this will be sufficient? Testing compliance with HTML standards by visual inspection is not satisfactory, for the simple reason that Web browsers are designed to process Web pages which do not comply with standards as best they can. However one should not use this permissive approach by Web browsers as a justification for not bothering with compliance with standards. Strict compliance with HTML standards is important for several reasons:

Avoiding Browser Lockin
Web pages which make use of proprietary browser features will not be accessible to other browsers. As we have seen with Netscape, even if a browser vendor has a significant market share there is no guarantee that this state of affairs will continue indefinitely.
Maximise Access To Browsers
Certain browsers may be more lenient with errors than others.
Maximise Accessibility
Web resources which comply with HTML standards will be more easily processed by screen readers and other accessibility devices.
Avoidance Of Court Cases
If, for example, Web-based teaching and learning resources are not accessible to students with disabilities, students may have a case, under the SENDA legislation which becomes law in September 2002, to sue the organisation.
Enhance Interoperability
Web resources which comply with HTML standards will be more easily processed by software tools, allowing for greater interoperability of the resource.
Enhance Performance
Web resources which comply with HTML standards, especially the XHTML standard, are likely to be processed and displayed more efficiently since the HTML parser will be able to process a valid resource and not check for errors as existing Web browsers are forced to do.
Facilitate Debugging
Web resources which comply with HTML standards should be easier to debug if the pages are not rendered correctly.
Facilitate Migration
Web resources which comply with HTML standards should be more easily ported to other environments.
It should be noted that when HTML resources need to be reused by other applications, there is an increasing requirement for the resources to comply rigorously with HTML standards. Arguing that a resource is almost compliant is like describing someone as almost a virgin!

HTML Standards

If HTML standards are important, which standards should be used? Many organisations are likely to have standardised on the HTML 4.01 specification [1]. Many widely-used HTML authoring tools can be used to create HTML pages which comply with this standard. However HTML 4 is no longer the latest version of HTML. The latest version is XHTML 1.0 [2], which became an official Recommendation of the W3C (the World Wide Web Consortium) in January 2000. XHTML is a "reformulation of HTML 4 in XML 1.0", which means that it will be able to be used in conjunction with XML tools and will benefit from developments of the XML language, as described in "The XHTML Interview" [3]. One benefit of XHTML which is worth noting is the XSLT language, which can be used for converting an XML resource into another format - either another XML application or a non-XML format, such as PDF. However in order for XSLT to work, the XML resource must be compliant. Ideally organisations should standardise on XHTML 1.0. However there may be obstacles to the use of XHTML 1.0 as an organisational standard, such as the need to upgrade authoring tools, provide training, etc. If this is the case then HTML 4.01 should be the organisational standard. Versions of HTML prior to this should not be used, as they do not provide an adequate level of support for accessibility.

Implementation Issues

Whether you are using XHTML or HTML 4.01 there are a small number of elements which you must use in order to ensure your resources are compliant. Your Web page must begin with the document type declaration (DOCTYPE). For XHTML this is of the form:

    <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">

whereas for an HTML 4.01 document it could be:

    <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">

The document type declaration is used to specify which type of HTML is to be used. In the first example above the document is an XHTML 1.0 transitional document, whereas in the second example the document is an HTML 4.0 transitional document. Once the DOCTYPE has been defined you should give the <html> element. If you are using XHTML, you will have to specify the namespace:

    <html xmlns="http://www.w3.org/1999/xhtml">

Why is this needed? XHTML is an XML application, and XML can be regarded as a meta-language which can be used to create other languages - for example MathML, the Mathematical Markup Language [4]. Since it may be necessary to create resources which combine languages (for example an XHTML document which contains mathematical formulae) a namespace is needed to differentiate the XHTML element names from those belonging to MathML.
In the document's <meta> element you should specify the character set for the document:

    <meta http-equiv="Content-Type" content="text/html;charset=UTF-8" />

Your XHTML will therefore have the following basic structure:

    <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
    <html xmlns="http://www.w3.org/1999/xhtml">
    <head>
    <title>XHTML Template</title>
    <meta http-equiv="Content-Type" content="text/html;charset=UTF-8" />
    </head>
    <body>
    </body>
    </html>

Note that the elements shown above can be regarded as mandatory for most XHTML documents (the DOCTYPE could be replaced by a more rigorous definition, but the one given is suitable for most purposes). The format of HTML 4.01 documents will be:

    <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
    <html>
    <head>
    <title>HTML Template</title>
    <meta http-equiv="Content-Type" content="text/html;charset=UTF-8">
    </head>
    <body>
    </body>
    </html>

Note that the DOCTYPE shown above is mandatory for most HTML 4.01 documents (it could be replaced by a more rigorous definition, but the one given is suitable for most purposes). If you are updating the template for resources on your Web site it would be useful to include a definition of the language type:

    <html xmlns="http://www.w3.org/1999/xhtml" lang="en-gb">

or

    <html lang="en-gb">

Although not mandatory, the language definition is needed if you wish to seek compliance with the W3C WAI AAA guidelines [5].

Ensuring Compliance With HTML Standards

We have seen some of the mandatory elements of XHTML and HTML 4. These must appear in compliant documents. Ideally these will be included in templates provided to Web page authors or generated by a Content Management System, by use of XSLT, backend scripts, SSIs (server-side includes), etc. However, as we know, agreeing on a standard and providing templates do not necessarily mean that compliant documents will be produced: authoring tools may still fail to produce compliant resources, templates may be altered, etc. There will still be a need to test resources for compliance with standards. There are several approaches for the checking of compliance with HTML standards:
However, as we know, agreeing on a standard and providing templates do not necessarily mean that compliant documents will be produced: authoring tools may still fail to produce compliant resources, templates may be altered, etc. There will still be a need to test resources for compliance with standards. There are several approaches to checking compliance with HTML standards:

Checking Within The Authoring Tool
Many HTML authoring tools provide HTML compliance checking facilities. However it should be noted that (a) compliance with the XHTML standard may not be possible and (b) authoring tools which work with HTML fragments may not provide correct results.
External HTML Validation Tools
A number of HTML validation tools are available. These include desktop tools, such as CSE HTML Validator [6] and Doctor HTML [7], and Web-based tools, such as the W3C HTML Validator [8] and the WDG HTML Validator [9].
Although many HTML validation tools are available, using them to check individual pages is difficult if you have an existing large Web site to maintain. In addition, if the validation process is separate from the page creation or maintenance process, it is likely that validation will simply be forgotten about. There are ways of addressing these problems, such as the use of tools which can validate entire Web sites and the integration of validation with the page maintenance process.

A number of tools can validate entire Web sites, such as CSE HTML Validator Professional 6.0 [6] and the WDG HTML Validator mentioned previously [10]. Another approach is to embed a live link to an online validation service, allowing the page to be validated by clicking on the link. This approach was used on the Institutional Web Management Workshop 2002 Web site [11], as illustrated below.

Figure 1: Embedded Links to Validation Services

A refinement to this approach could be to provide a personalised interface to such validation links, so that the icons are seen only by the page maintainer. This could be implemented through, for example, the use of cookies.

Another approach, which ensures that validation services can be integrated with the Web browser, is to make use of a technique sometimes referred to as "bookmarklets". With this approach a bookmark to, for example, a validation service is added to your Web browser. The bookmarklet can be configured so that it will analyse the page which is currently being viewed, thus avoiding the need to copy and paste URLs. Use of this type of service is illustrated below. (Minimal sketches of both the embedded link and a bookmarklet are given at the end of this section.)

Figure 2: Use Of "Bookmarklets"

A number of bookmarklets, together with further information on how they work, are available from the Bookmarklets Web site [12]. In addition to these approaches it is likely that we will see a growth in commercial Web site auditing and testing tools, such as the LinkScan Server and Workstation software [13], and services such as that provided by Business2WWW [14].

Challenges In Ensuring Compliance

This article has described the importance of compliance with HTML standards, some of the key elements of XHTML and HTML 4.01 documents, and a number of tools and approaches for ensuring that documents comply with standards. However, even if Web managers provide tools to create XHTML-compliant resources, it is still likely that non-compliant resources will be created on large Web sites. This is especially likely when Web resources are created using third-party software over which little control of the output format is available. This is true of, for example, Microsoft Office files, although, to be fair to Microsoft, the open source OpenOffice software [15] also does not support XHTML output.

What can be done in such cases? The best advice is to ensure that the resource is available in HTML, even if the HTML fails to comply with standards. This will ensure the resource is available to standard Web browsers, even if it cannot easily be repurposed. In the case of software such as Microsoft Office, which provides an option for the type of HTML to be generated, you should ensure that the HTML output can be viewed by a wide range of browsers and is not optimised for particular browsers. In the case of widely used proprietary formats for which viewers are freely available you should probably provide links to both the HTML and the proprietary version.
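As promised above, here is a minimal sketch of an embedded validation link of the kind shown in Figure 1. The uri=referer convention asks the W3C validator to check the page from which the link was followed, so the same markup can be placed in a site-wide template (this behaviour is a feature of the W3C service; check the validator's documentation before relying on it):

  <!-- The validator uses the HTTP Referer header to find the page to check -->
  <p>
    <a href="http://validator.w3.org/check?uri=referer">Validate this page</a>
  </p>

A bookmarklet can invoke the same service for whatever page is currently on view. Saved as a browser bookmark, a link such as the following (again a sketch, using only standard JavaScript) submits the current location to the validator:

  <!-- The javascript: URL runs in the browser when the bookmark is selected -->
  <a href="javascript:void(document.location='http://validator.w3.org/check?uri='+encodeURIComponent(document.location))">Validate current page</a>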
For proprietary formats, another option, in cases where conversion to HTML may be time-consuming, would be to provide a link to an online conversion service, such as Adobe's online tool which can convert PDF to HTML [16].

Further Information

A good starting point for further information on Web and HTML standards is The Web Standards Project, a group which "fights for standards that reduce the cost and complexity of development while increasing the accessibility and long-term viability of any site published on the Web" [17]. The Web Standards Project provides a valuable FAQ on "What are web standards and why should I use them?" [18]. IBM provide a useful introduction to XHTML, which gives a more complete description of the mandatory features of XHTML [19]. HotWired provide a useful summary of the work of the W3C and The Web Standards Project in an article on "Web Standards For Hard Times" [20]. Finally, the W3C are in the process of developing guidelines on "Buying Standards Compliant Web Sites" [21]. They have also recently set up the public-evangelist mailing list, which provides a forum for discussion of Web standards [22].
References

  1. HTML 4.01 Specification, W3C http://www.w3.org/TR/html4/
  2. XHTML 1.0, W3C http://www.w3.org/TR/xhtml1/
  3. The XHTML Interview, Exploit Interactive, issue 6, 26th June 2000 http://www.exploit-lib.org/issue6/xhtml/
  4. W3C Math Home, W3C http://www.w3.org/Math/
  5. Web Content Accessibility Guidelines 1.0, W3C http://www.w3.org/TR/1999/WAI-WEBCONTENT-19990505/#tech-identify-lang
  6. CSE HTML Validator, http://www.htmlvalidator.com/
  7. Doctor HTML, http://www2.imagiware.com/RxHTML/
  8. W3C Validation Service, W3C http://validator.w3.org/
  9. WDG HTML Validator, WDG http://www.htmlhelp.com/tools/validator/
  10. WDG HTML Validator - Batch Mode, WDG http://www.htmlhelp.com/tools/validator/batch.html
  11. Institutional Web Management Workshop 2002, UKOLN http://www.ukoln.ac.uk/web-focus/events/workshops/webmaster-2002/
  12. Bookmarklets Home Page, Bookmarklets http://www.bookmarklets.com/
  13. Linkscan, Elsop http://www.elsop.com/
  14. Business2WWW - SiteMorse Automated Web Testing, Business2WWW http://www.business2www.com/
  15. OpenOffice, OpenOffice.org http://www.openoffice.org/
  16. PDF Conversion, Adobe http://access.adobe.com/simple_form.html
  17. The Web Standards Project, The Web Standards Project http://www.webstandards.org/
  18. What are web standards and why should I use them?, The Web Standards Project http://www.webstandards.org/learn/faq/
  19. XHTML 1.0: Marking up a new dawn, IBM http://www-106.ibm.com/developerworks/library/w-xhtml.html
  20. Web Standards for Hard Times, HotWired http://hotwired.lycos.com/webmonkey/02/33/index1a.html
  21. Buy Standards Compliant Web Sites, W3C http://www.w3c.org/QA/2002/07/WebAgency-Requirements
  22. public-evangelist@w3.org Mail Archive, W3C http://lists.w3.org/Archives/Public/public-evangelist/
Author Details

Brian Kelly
UK Web Focus
UKOLN
University of Bath
Bath BA2 7AY
Email: b.kelly@ukoln.ac.uk

Brian Kelly is UK Web Focus. He works for UKOLN, which is based at the University of Bath.
Article Title: "Let's Get Serious About HTML Standards"
Author: Brian Kelly
Publication Date: Sep-2002
Publication: Ariadne Issue 33
Originating URL: http://www.ariadne.ac.uk/issue33/web-focus/



          LXer: Linux Rolls Out to Most Toyota and Lexus Vehicles in North America   
At the recent Automotive Linux Summit, held May 31 to June 2 in Tokyo, The Linux Foundation's Automotive Grade Linux (AGL) project made one of the biggest announcements in its short history: the first automobile with AGL's open source, Linux-based Unified Code Base (UCB) infotainment stack will hit the streets in a few months.
          NFV Automation: Windstream joins ONAP to Drive Adoption of Open Standards   

Windstream, a global provider of advanced network communications, announced it has joined the Open Network Automation Platform (ONAP) Project, underscoring the company's commitment to collaborating with peers in the open source world and supporting breakthroughs and innovation within the communications industry, as well as helping to set the stage for building uniform standards for software defined […]



          Former Libertarian and Reform Party Presidential Candidate Appears on Infowars   

Robert David Steele

Robert David Steele, a 2012 candidate for the Reform Party’s presidential nomination and briefly a candidate for the Libertarian Party’s 2016 presidential nomination, appeared on broadcaster Alex Jones’ Infowars program this past week.

Notably, Steele, an open source activist who says he is a former CIA agent, said the following during the appearance:

We actually believe that there is a colony on Mars, populated by children who were kidnapped and sent into space on a 20 year ride so that once they get to Mars they have no alternative but to be slaves on the Mars colony.

          Devices: Tesla, Ubuntu Core, Julia, DEN, Synopsys, MinnowBoard, AGL and More   

          Games: Steam Linux Usage, Rocket Wars, KDE, and Retro   
  • Steam Linux Usage Saw A Notable Decline For June 2017

    The reported Steam Linux market share according to Valve is now just 0.72%, a drop of 0.09 percentage points. We don't usually see swings of close to 0.1% in a single month, which makes the decline a bit surprising, especially during the summer months and after last month's Linux releases of titles like Dawn of War III. This is also well off the initial Steam Linux highs of around 2%. Granted, one can argue that the Steam market is continually getting larger, so there may be more Linux gamers today than a few years ago. Some also point to potential inaccuracies in the Steam Survey.

  • A few thoughts about Rocket Wars, a hectic local multiplayer experience

    If shooting at bots or friends while controlling spacecraft is your idea of a good time, there’s good reason for you to take a look at this action game.

  • GSoC-First month analysis

    Animations are yet to be done, where the seeds will be seen moving on each turn to make the game more interactive. Next in turn is the AI mode, where I will use alpha-beta pruning to make the computer give the player real competition. To play the two-player mode, see the Code Repository. More about the AI mode in the next blog post.