An identifying relationship

An identifying relationship “describes a situation in which the existence of a row in the child table depends on a row in the parent table.”

“if a child identifies its parent, it is an identifying relationship.”

The technical definition of an identifying relationship is that a child’s foreign key is part of its primary key.

CREATE TABLE AuthoredBook (
  author_id INT NOT NULL,
  book_id INT NOT NULL,
  PRIMARY KEY (author_id, book_id),
  FOREIGN KEY (author_id) REFERENCES Authors(author_id),
  FOREIGN KEY (book_id) REFERENCES Books(book_id)
);

See? book_id is a foreign key, but it’s also one of the columns in the primary key. So this table has an identifying relationship with the referenced table Books. Likewise it has an identifying relationship with Authors.

A comment on a YouTube video has an identifying relationship with the respective video. The video_id should be part of the primary key of the Comments table.

CREATE TABLE Comments (
  video_id INT NOT NULL,
  user_id INT NOT NULL,
  comment_dt DATETIME NOT NULL,
  PRIMARY KEY (video_id, user_id, comment_dt),
  FOREIGN KEY (video_id) REFERENCES Videos(video_id),
  FOREIGN KEY (user_id) REFERENCES Users(user_id)
);

It may be hard to understand this because it’s such common practice these days to use only a serial surrogate key instead of a compound primary key:

CREATE TABLE Comments (
  comment_id SERIAL PRIMARY KEY,
  video_id INT NOT NULL,
  user_id INT NOT NULL,
  comment_dt DATETIME NOT NULL,
  FOREIGN KEY (video_id) REFERENCES Videos(video_id),
  FOREIGN KEY (user_id) REFERENCES Users(user_id)
);

This can obscure cases where the tables have an identifying relationship.

I would not consider SSN to represent an identifying relationship. Some people exist but do not have an SSN. Other people may file to get a new SSN. So the SSN is really just an attribute, not part of the person’s primary key.
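
By contrast, a non-identifying relationship in plain SQL looks like this (a sketch; the Publishers table is made up): the foreign key is an ordinary attribute, not part of the primary key.

CREATE TABLE Books (
  book_id INT PRIMARY KEY,   -- the row is identified on its own
  publisher_id INT NOT NULL, -- foreign key, but not part of the primary key
  FOREIGN KEY (publisher_id) REFERENCES Publishers(publisher_id)
);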

You can also take a look at the MySQL manual, which explains how to add foreign keys in MySQL Workbench.

Adding Foreign Key Relationships Using an EER Diagram

The vertical toolbar on the left side of an EER Diagram has six foreign key tools:

  • one-to-one non-identifying relationship
  • one-to-many non-identifying relationship
  • one-to-one identifying relationship
  • one-to-many identifying relationship
  • many-to-many identifying relationship
  • Place a Relationship Using Existing Columns

An identifying relationship is one where the child table cannot be uniquely identified without its parent. Typically this occurs where an intermediary table is created to resolve a many-to-many relationship. In such cases, the primary key is usually a composite key made up of the primary keys from the two original tables. An identifying relationship is indicated by a solid line between the tables and a nonidentifying relationship is indicated by a broken line.

Create or drag and drop the tables that you wish to connect. Ensure that there is a primary key in the table that will be on the “one” side of the relationship. Click on the appropriate tool for the type of relationship you wish to create. If you are creating a one-to-many relationship, first click the table that is on the “many” side of the relationship, then on the table containing the referenced key. This creates a column in the table on the many side of the relationship. The default name of this column is table_name_key_name where the table name and the key name both refer to the table containing the referenced key.

When the many-to-many tool is active, double-clicking a table creates an associative table with a many-to-many relationship. For this tool to function there must be a primary key defined in the initial table.

Use the Model menu, Menu Options menu item to set a project-specific default name for the foreign key column (see Section 8.5.1.5.4, “The Relationship Notation Submenu”). To change the global default, see Section 6.4.5, “The Model Tab”.

To edit the properties of a foreign key, double-click anywhere on the connection line that joins the two tables. This opens the relationship editor.

Mousing over a relationship connector highlights the connector and the related keys as shown in the following figure. The film and the film_actor tables are related on the film_id field and these fields are highlighted in both tables. Since the film_id field is part of the primary key in the film_actor table, a solid line is used for the connector between the two tables.

VirtualBox guest additions

Higher screen resolution in VirtualBox?

  1. Start Virtual box and log into Ubuntu.
  2. Hit the right ctrl key so you can get your mouse pointer outside the virtual machine.

  3. Go to the top of the virtual window, click on Devices, then select “Install Guest Additions”. You will see a window pop up inside Ubuntu showing you that there are some new files mounted in a virtual CDROM drive. One of those files should be VBoxLinuxAdditions.run.

You must run the file with some admin permissions so do that this way…

  1. Click inside the Ubuntu screen again then go to Applications – Accessories then Terminal. The terminal window is where you will run the file from, but first we must navigate to the correct directory.
  2. type this… cd /media/cdrom0 (then hit enter, there is a space after cd!)
  3. next type… dir (You should see amongst the files displayed VBoxLinuxAdditions.run)
  4. now type… sudo sh ./VBoxLinuxAdditions.run (yes, that is a full stop before the slash!)

After you hit enter and it has done its stuff, the files are now accessible from Ubuntu.

  5. You now need to reboot the virtual machine or press Ctrl+Alt+Backspace.
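
For reference, the whole sequence looks like this in a terminal (assuming the Guest Additions CD is mounted at /media/cdrom0, as above):

 cd /media/cdrom0
 dir                               # check that VBoxLinuxAdditions.run is listed
 sudo sh ./VBoxLinuxAdditions.run  # run the installer with admin permissions
 sudo reboot                       # restart the VM so the additions take effect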

PostgreSQL

PostgreSQL, often simply Postgres, is an object-relational database management system (ORDBMS) available for many platforms including Linux, FreeBSD, Solaris, Microsoft Windows and Mac OS X.[4] It is released under the PostgreSQL License, which is an MIT-style license, and is thus free and open source software. PostgreSQL is developed by the PostgreSQL Global Development Group, consisting of a handful of volunteers employed and supervised by companies such as Red Hat and EnterpriseDB.[5] It implements the majority of the SQL:2008 standard,[6] is ACID-compliant, is fully transactional (including all DDL statements), has extensible data types, operators, index methods, functions, aggregates, procedural languages, and has a large number of extensions written by third parties.

The vast majority of Linux distributions have PostgreSQL available in supplied packages. Mac OS X, starting with Lion, has PostgreSQL server as its standard default database in the server edition,[7][8] and PostgreSQL client tools in the desktop edition.

PostgreSQL has bindings for many programming languages such as C, C++, Python, Java, PHP, Ruby… It can be used to power anything from simple web applications to massive databases with millions of records.


Client Installation

If you only wish to connect to a PostgreSQL server, do not install the whole PostgreSQL package; install the PostgreSQL client instead. To do this, use the following command:

 sudo apt-get install postgresql-client

You then connect to the server with the following command:

 psql -h server.domain.org database user

After you have entered the password, you can work with PostgreSQL using SQL commands. You may for instance enter the following:

 SELECT * FROM mytable WHERE true;

You exit the connection with

 \q

Installing PostgreSQL Database

  1. Install PostgreSQL using the apt-get command in gnome-terminal:
    sudo apt-get install postgresql libpq-dev
  2. After installation is complete, change user to the PostgreSQL user:
    sudo su - postgres
  3. You are now working as the PostgreSQL user. Now, let’s change your database password to be more robust. In this example, I’m setting the password as “s0meth1ng”:
    ~$ psql -d postgres -U postgres
    psql (9.1.3)
    Type "help" for help.
    postgres=# alter user postgres with password 's0meth1ng';
    ALTER ROLE
    postgres=# \q
  4. Restart the PostgreSQL database to let the changes take effect:
    sudo /etc/init.d/postgresql restart

Installing and Setting Up pgAdmin III

  1. Install pgAdmin III using the apt-get command in gnome-terminal:
    sudo apt-get install pgadmin3
  2. Once installed, you can launch pgAdmin III by pressing Alt-F2 and typing pgadmin3.
  3. Now, let’s add a new PostgreSQL database server to the list of servers. Go to File > Add Server, and enter the details as in the following screenshot:
    [Screenshot: pgAdmin III Server Configuration]
  4. Once that is done, you’ll see your new server in the list of servers on the left pane. Go ahead, and create your database. Have fun!
    [Screenshot: List of Database Servers in pgAdmin III]

Administration

pgAdmin III is a handy GUI for PostgreSQL; it is essential for beginners. To install it, type at the command line:

 sudo apt-get install pgadmin3

You may also use the Synaptic package manager from the System>Administration menu to install these packages.

Basic Server Setup

To start off, we need to change the PostgreSQL postgres user password; we will not be able to access the server otherwise. As the “postgres” Linux user, we will execute the psql command.

In a terminal, type:

sudo -u postgres psql postgres

Set a password for the “postgres” database role using the command:

\password postgres

and give your password when prompted. The password text will be hidden from the console for security purposes.

Type Control+D to exit the PostgreSQL prompt.
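
Put together, the whole exchange looks something like this (the version banner and prompts are illustrative):

 $ sudo -u postgres psql postgres
 psql (9.1.3)
 postgres=# \password postgres
 Enter new password:
 Enter it again:
 postgres=# \q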

Create database

To create the first database, which we will call “mydb”, simply type:

 sudo -u postgres createdb mydb
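
You can then connect to it to check that it exists (and leave again with \q):

 sudo -u postgres psql mydb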

Install Server Instrumentation for Postgresql 8.4 or 9.1

To install Server Instrumentation, you must install postgresql-contrib:

 sudo apt-get install postgresql-contrib

For PostgreSQL 9.1+ install the adminpack “extension”:

 sudo -u postgres psql
 CREATE EXTENSION adminpack;

Alternative Server Setup

If you don’t intend to connect to the database from other machines, this alternative setup may be simpler.

By default in Ubuntu, Postgresql is configured to use ‘ident sameuser’ authentication for any connections from the same machine. Check out the excellent Postgresql documentation for more information, but essentially this means that if your Ubuntu username is ‘foo’ and you add ‘foo’ as a Postgresql user then you can connect to the database without requiring a password.
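
The line in pg_hba.conf that enables this looks roughly as follows (the path varies by version, and newer releases spell the method “peer” rather than “ident sameuser”):

 # /etc/postgresql/9.1/main/pg_hba.conf
 local   all   all   ident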

Since the only user who can connect to a fresh install is the postgres user, here is how to create yourself a database account (which is in this case also a database superuser) with the same name as your login name and then create a password for the user:

 sudo -u postgres createuser --superuser $USER
 sudo -u postgres psql

 postgres=# \password $USER

Client programs, by default, connect to the local host using your Ubuntu login name and expect to find a database with that name too. So to make things REALLY easy, use your new superuser privileges granted above to create a database with the same name as your login name:

 createdb $USER

Connecting to your own database to try out some SQL should now be as easy as:

 psql

To create a database with a user that has full rights on the database, use the following command:

sudo -u postgres createuser -D -A -P myuser
sudo -u postgres createdb -O myuser mydb

The first command creates the user with no database creation rights (-D) and no add-user rights (-A), and will prompt you for a password (-P). The second command creates the database ‘mydb’ with ‘myuser’ as owner.

This little example will probably suit most of your needs. For more details, please refer to the corresponding man pages or the online documentation.

$ psql -d postgres
postgres=# create role app_name login createdb;
postgres=# \q

SQL Dump

The idea behind this dump method is to generate a text file with SQL commands that, when fed back to the server, will recreate the database in the same state as it was at the time of the dump. PostgreSQL provides the utility program pg_dump for this purpose. The basic usage of this command is:

pg_dump dbname > outfile

As you see, pg_dump writes its result to the standard output. We will see below how this can be useful.

pg_dump is a regular PostgreSQL client application (albeit a particularly clever one). This means that you can perform this backup procedure from any remote host that has access to the database. But remember that pg_dump does not operate with special permissions. In particular, it must have read access to all tables that you want to back up, so in practice you almost always have to run it as a database superuser.

To specify which database server pg_dump should contact, use the command line options -h host and -p port. The default host is the local host or whatever your PGHOST environment variable specifies. Similarly, the default port is indicated by the PGPORT environment variable or, failing that, by the compiled-in default. (Conveniently, the server will normally have the same compiled-in default.)
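
For example, to dump a database running on another host (host name and port are illustrative):

pg_dump -h db.example.com -p 5432 mydb > outfile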

Like any other PostgreSQL client application, pg_dump will by default connect with the database user name that is equal to the current operating system user name. To override this, either specify the -U option or set the environment variable PGUSER. Remember that pg_dump connections are subject to the normal client authentication mechanisms (which are described in Chapter 19).

An important advantage of pg_dump over the other backup methods described later is that pg_dump’s output can generally be re-loaded into newer versions of PostgreSQL, whereas file-level backups and continuous archiving are both extremely server-version-specific. pg_dump is also the only method that will work when transferring a database to a different machine architecture, such as going from a 32-bit to a 64-bit server.

Dumps created by pg_dump are internally consistent, meaning, the dump represents a snapshot of the database at the time pg_dump began running. pg_dump does not block other operations on the database while it is working. (Exceptions are those operations that need to operate with an exclusive lock, such as most forms of ALTER TABLE.)

Important: If your database schema relies on OIDs (for instance, as foreign keys) you must instruct pg_dump to dump the OIDs as well. To do this, use the -o command-line option.

24.1.1. Restoring the Dump

The text files created by pg_dump are intended to be read in by the psql program. The general command form to restore a dump is

psql dbname < infile

where infile is the file output by the pg_dump command. The database dbname will not be created by this command, so you must create it yourself from template0 before executing psql (e.g., with createdb -T template0 dbname). psql supports options similar to pg_dump for specifying the database server to connect to and the user name to use. See the psql reference page for more information.

Before restoring an SQL dump, all the users who own objects or were granted permissions on objects in the dumped database must already exist. If they do not, the restore will fail to recreate the objects with the original ownership and/or permissions. (Sometimes this is what you want, but usually it is not.)

By default, the psql script will continue to execute after an SQL error is encountered. You might wish to run psql with the ON_ERROR_STOP variable set to alter that behavior and have psql exit with an exit status of 3 if an SQL error occurs:

psql --set ON_ERROR_STOP=on dbname < infile

Either way, you will only have a partially restored database. Alternatively, you can specify that the whole dump should be restored as a single transaction, so the restore is either fully completed or fully rolled back. This mode can be specified by passing the -1 or --single-transaction command-line options to psql. When using this mode, be aware that even a minor error can rollback a restore that has already run for many hours. However, that might still be preferable to manually cleaning up a complex database after a partially restored dump.
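
For example:

psql --single-transaction dbname < infile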

The ability of pg_dump and psql to write to or read from pipes makes it possible to dump a database directly from one server to another, for example:

pg_dump -h host1 dbname | psql -h host2 dbname

Important: The dumps produced by pg_dump are relative to template0. This means that any languages, procedures, etc. added via template1 will also be dumped by pg_dump. As a result, when restoring, if you are using a customized template1, you must create the empty database from template0, as in the example above.

After restoring a backup, it is wise to run ANALYZE on each database so the query optimizer has useful statistics; see Section 23.1.3 and Section 23.1.5 for more information. For more advice on how to load large amounts of data into PostgreSQL efficiently, refer to Section 14.4.

24.1.2. Using pg_dumpall

pg_dump dumps only a single database at a time, and it does not dump information about roles or tablespaces (because those are cluster-wide rather than per-database). To support convenient dumping of the entire contents of a database cluster, the pg_dumpall program is provided. pg_dumpall backs up each database in a given cluster, and also preserves cluster-wide data such as role and tablespace definitions. The basic usage of this command is:

pg_dumpall > outfile

The resulting dump can be restored with psql:

psql -f infile postgres

(Actually, you can specify any existing database name to start from, but if you are loading into an empty cluster then postgres should usually be used.) It is always necessary to have database superuser access when restoring a pg_dumpall dump, as that is required to restore the role and tablespace information. If you use tablespaces, make sure that the tablespace paths in the dump are appropriate for the new installation.
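
For example, to restore the cluster dump as the postgres superuser:

psql -U postgres -f infile postgres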

pg_dumpall works by emitting commands to re-create roles, tablespaces, and empty databases, then invoking pg_dump for each database. This means that while each database will be internally consistent, the snapshots of different databases might not be exactly in-sync.

24.1.3. Handling Large Databases

Some operating systems have maximum file size limits that cause problems when creating large pg_dump output files. Fortunately, pg_dump can write to the standard output, so you can use standard Unix tools to work around this potential problem. There are several possible methods:

Use compressed dumps. You can use your favorite compression program, for example gzip:

pg_dump dbname | gzip > filename.gz

Reload with:

gunzip -c filename.gz | psql dbname

or:

cat filename.gz | gunzip | psql dbname

Use split. The split command allows you to split the output into smaller files that are acceptable in size to the underlying file system. For example, to make chunks of 1 megabyte:

pg_dump dbname | split -b 1m - filename

Reload with:

cat filename* | psql dbname

Use pg_dump’s custom dump format. If PostgreSQL was built on a system with the zlib compression library installed, the custom dump format will compress data as it writes it to the output file. This will produce dump file sizes similar to using gzip, but it has the added advantage that tables can be restored selectively. The following command dumps a database using the custom dump format:

pg_dump -Fc dbname > filename

A custom-format dump is not a script for psql, but instead must be restored with pg_restore, for example:

pg_restore -d dbname filename

See the pg_dump and pg_restore reference pages for details.

For very large databases, you might need to combine split with one of the other two approaches.

To run a file of SQL commands against a database:

psql -d myDataBase -a -f myInsertFile

You have three choices to supply a password:

  1. set the PGPASSWORD environment variable. For details see the manual: http://www.postgresql.org/docs/current/static/libpq-envars.html
  2. use a .pgpass file to store the password. For details see the manual: http://www.postgresql.org/docs/current/static/libpq-pgpass.html
  3. use “trust authentication” for that specific user: http://www.postgresql.org/docs/current/static/auth-methods.html#AUTH-TRUST
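
For example, option 1 and option 2 look like this (values are illustrative):

PGPASSWORD=s0meth1ng psql -d myDataBase -a -f myInsertFile

# ~/.pgpass (must have mode 0600); one line per server:
# hostname:port:database:username:password
localhost:5432:myDataBase:postgres:s0meth1ng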

DbVisualizer is a Java Swing app which can generate relation graphs from any JDBC source (including PostgreSQL). I’ve found the best way to view the generated graph (after you have found a layout you like) is to print it to PDF and use Preview.app to view the result. The built-in view is somewhat lacking.

There are also a few Graphviz options around including AutoDoc (example output). This may be a better option if you are automating the documentation generation. With a bit of work you can style the output quite a bit.

Double-click on the DB/schema in the navigator pane and then select the “References” tab. If you select the whole DB it’ll give you all of the system tables too — you can filter out tables that you don’t want by selecting the “Specified Tables” option.

You can also try out SQL*Power Architect.

To download it without registering, go directly to the project page on Google Code:

http://code.google.com/p/power-architect/

psql

Name

psql —  PostgreSQL interactive terminal

Synopsis

psql [option…] [dbname [username]]

Description

psql is a terminal-based front-end to PostgreSQL. It enables you to type in queries interactively, issue them to PostgreSQL, and see the query results. Alternatively, input can be from a file. In addition, it provides a number of meta-commands and various shell-like features to facilitate writing scripts and automating a wide variety of tasks.

Options

-a
--echo-all
Print all input lines to standard output as they are read. This is more useful for script processing than interactive mode. This is equivalent to setting the variable ECHO to all.

-A
--no-align
Switches to unaligned output mode. (The default output mode is otherwise aligned.)

-c command
--command command
Specifies that psql is to execute one command string, command, and then exit. This is useful in shell scripts. Start-up files (psqlrc and ~/.psqlrc) are ignored with this option.

command must be either a command string that is completely parsable by the server (i.e., it contains no psql-specific features), or a single backslash command. Thus you cannot mix SQL and psql meta-commands with this option. To achieve that, you could pipe the string into psql, like this: echo '\x \\ SELECT * FROM foo;' | psql. (\\ is the separator meta-command.)

If the command string contains multiple SQL commands, they are processed in a single transaction, unless there are explicit BEGIN/COMMIT commands included in the string to divide it into multiple transactions. This is different from the behavior when the same string is fed to psql’s standard input.
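
For example, both statements here run in one transaction (the table is a placeholder):

psql -c "CREATE TABLE t (n INT); INSERT INTO t VALUES (1);"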

-d dbname
--dbname dbname
Specifies the name of the database to connect to. This is equivalent to specifying dbname as the first non-option argument on the command line.

If this parameter contains an = sign, it is treated as a conninfo string. See Section 31.1 for more information.

-e

Ajax

Ajax (also AJAX; /ˈeɪdʒæks/; an acronym for Asynchronous JavaScript and XML)[1] is a group of interrelated web development techniques used on the client-side to create asynchronous web applications. With Ajax, web applications can send data to, and retrieve data from, a server asynchronously (in the background) without interfering with the display and behavior of the existing page. Data can be retrieved using the XMLHttpRequest object. Despite the name, the use of XML is not required (JSON is often used instead), and the requests do not need to be asynchronous.[2]

Ajax is not a single technology, but a group of technologies. HTML and CSS can be used in combination to mark up and style information. The DOM is accessed with JavaScript to dynamically display, and allow the user to interact with, the information presented. JavaScript and the XMLHttpRequest object provide a method for exchanging data asynchronously between browser and server to avoid full page reloads.
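
A minimal sketch of that pattern, fetching data in the background and updating the page in place (the URL and element id are placeholders):

 var xhr = new XMLHttpRequest();
 xhr.open("GET", "/data.json", true);   // true = asynchronous
 xhr.onreadystatechange = function () {
     if (xhr.readyState === 4 && xhr.status === 200) {
         // update the DOM instead of reloading the whole page
         document.getElementById("result").textContent = xhr.responseText;
     }
 };
 xhr.send();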

The Document Object Model (DOM) is a cross-platform and language-independent convention for representing and interacting with objects in HTML, XHTML and XML documents.[1] Objects in the DOM tree may be addressed and manipulated by using methods on the objects. The public interface of a DOM is specified in its application programming interface (API). The history of the Document Object Model is intertwined with the history of the “browser wars” of the late 1990s between Netscape Navigator and Microsoft Internet Explorer, as well as with that of JavaScript and JScript, the first scripting languages to be widely implemented in the layout engines of web browsers.

Dojo or jQuery, quick answer

  • jQuery if you are new to JavaScript/web programming and only want to jazz up your pages a little. Also, if your project is only a few months and/or only a few hundred lines, pick jQuery. It will get you there faster.
  • Dojo if you have a large project and can spend time on a very steep learning curve and want to be able to create and re-use widgets, data connections and whatnot.

This answer does not take into account the “fun factor”. If your aim is to have fun, jQuery will give you a quick fix but Dojo will be more rewarding in the long run.

JSON

JSON (/ˈdʒeɪsɒn/ JAY-sawn, /ˈdʒeɪsən/ JAY-sun), or JavaScript Object Notation, is a text-based open standard designed for human-readable data interchange. It is derived from the JavaScript scripting language for representing simple data structures and associative arrays, called objects. Despite its relationship to JavaScript, it is language-independent, with parsers available for many languages.

The JSON format was originally specified by Douglas Crockford, and is described in RFC 4627. The official Internet media type for JSON is application/json. The JSON filename extension is .json.

The JSON format is often used for serializing and transmitting structured data over a network connection. It is used primarily to transmit data between a server and web application, serving as an alternative to XML.

JSON’s basic types are:

  • Number (double precision floating-point format in JavaScript, generally depends on implementation)
  • String (double-quoted Unicode, with backslash escaping)
  • Boolean (true or false)
  • Array (an ordered sequence of values, comma-separated and enclosed in square brackets; the values do not need to be of the same type)
  • Object (an unordered collection of key:value pairs with the ‘:’ character separating the key and the value, comma-separated and enclosed in curly braces; the keys must be strings and should be distinct from each other)
  • null (empty)

Non-significant white space may be added freely around the “structural characters” (i.e. brackets “{ } [ ]”, colons “:” and commas “,”).

The following example shows the JSON representation of an object that describes a person. The object has string fields for first name and last name, a number field for age, an object representing the person’s address and an array of phone number objects.

{
    "firstName": "John",
    "lastName": "Smith",
    "age": 25,
    "address": {
        "streetAddress": "21 2nd Street",
        "city": "New York",
        "state": "NY",
        "postalCode": 10021
    },
    "phoneNumbers": [
        {
            "type": "home",
            "number": "212 555-1234"
        },
        {
            "type": "fax",
            "number": "646 555-4567"
        }
    ]
}

One potential pitfall of the free-form nature of JSON comes from the ability to write numbers as either numeric literals or quoted strings. For example, ZIP Codes in the northeastern U.S. begin with zeroes (for example, 07728 for Freehold, New Jersey). If written with quotes by one programmer but not by another, the leading zero could be dropped when exchanged between systems, when searched for within the same system, or when printed. In addition, postal codes in the U.S. are numbers but other countries use letters as well. This is a type of problem that the use of a JSON Schema (see below) is intended to reduce.
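
A small illustration of the pitfall (a hypothetical zip field):

 JSON.parse('{"zip": "07728"}').zip   // "07728", leading zero preserved
 JSON.parse('{"zip": 7728}').zip      // 7728; note that {"zip": 07728} is not
                                      // even legal JSON, since numbers may not
                                      // have leading zeros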

Since JSON is almost a subset of JavaScript, it is possible, but not recommended,[7] to parse most JSON text into an object by invoking JavaScript’s eval() function. For example, if the above JSON data is contained within a JavaScript string variable contact, one could use it to create the JavaScript object p as follows:

 var p = eval("(" + contact + ")");

The contact variable must be wrapped in parentheses to avoid an ambiguity in JavaScript’s syntax.[8]

The recommended way, however, is to use a JSON parser. Unless a client absolutely trusts the source of the text, or must parse and accept text that is not strictly JSON-compliant, one should avoid eval(). A correctly implemented JSON parser accepts only valid JSON, preventing potentially malicious code from being executed inadvertently.

 var p = JSON.parse(contact);

Browsers, such as Firefox 4 and Internet Explorer 8, include special features for parsing JSON. As native browser support is more efficient and secure than eval(), native JSON support is included in the recently-released Edition 5 of the ECMAScript standard.[9]

The jQuery library wraps the JSON text in a function constructor and executes it immediately if JSON.parse is not present. This avoids using eval in the code.

 var p = new Function('return ' + contact + ';')();

Despite the widespread belief that JSON is a JavaScript subset, this is not the case. Specifically, JSON allows the Unicode line terminators U+2028 line separator and U+2029 paragraph separator to appear unescaped in quoted strings, while JavaScript does not.[10] This is a consequence of JSON disallowing only “control characters”. This subtlety is important when generating JSONP.
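
A small demonstration of the mismatch (pre-ES2019 engines; ECMAScript 2019 changed JavaScript to accept these characters in string literals too):

 var text = '"\u2028"';   // a JSON string containing a raw U+2028
 JSON.parse(text);        // fine: valid JSON
 eval(text);              // SyntaxError in older JavaScript engines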

Eclipse in Ubuntu

Eclipse is a multi-language Integrated development environment (IDE) comprising a base workspace and an extensible plug-in system for customizing the environment. It is written mostly in Java. It can be used to develop applications in Java and, by means of various plug-ins, other programming languages including Ada, C, C++, COBOL, Fortran, Haskell, JavaScript, Perl, PHP, Python, R, Ruby (including Ruby on Rails framework), Scala, Clojure, Groovy, Scheme, and Erlang. It can also be used to develop packages for the software Mathematica. Development environments include the Eclipse Java development tools (JDT) for Java and Scala, Eclipse CDT for C/C++ and Eclipse PDT for PHP, among others.

The initial codebase originated from IBM VisualAge.[2] The Eclipse software development kit (SDK), which includes the Java development tools, is meant for Java developers. Users can extend its abilities by installing plug-ins written for the Eclipse Platform, such as development toolkits for other programming languages, and can write and contribute their own plug-in modules.

Released under the terms of the Eclipse Public License, Eclipse SDK is free and open source software (although it is incompatible with the GNU General Public License[3]). It was one of the first IDEs to run under GNU Classpath and it runs without problems under IcedTea.

Here are some steps to help you get Eclipse working on Ubuntu.

1. Install Sun Java JDK

#sudo apt-get install sun-java6-jdk

2.  Download Eclipse
You can go to the official site http://www.eclipse.org/downloads/ and choose your edition.

Save to your Desktop

3. Extract Eclipse
Open Terminal, and execute:

#cd ~/Desktop
#tar xzf eclipse-php-galileo-linux-gtk.tar.gz (replace your downloaded file name here)
#sudo mv eclipse /opt/eclipse
#sudo mv eclipse-galileo.png /opt/eclipse
#cd /opt
#sudo chown -R root:root eclipse
#sudo chmod -R 755 eclipse
#cd /opt/eclipse
#sudo chmod +x eclipse

4. Create a .desktop file to eclipse:

gedit ~/.local/share/applications/opt_eclipse.desktop

Then, paste this inside (don’t forget to edit the Exec and Icon values):

[Desktop Entry]
Type=Application
Name=Eclipse
Comment=Eclipse Integrated Development Environment
Icon=** something like /opt/eclipse/icon.xpm **
Exec= ** something like /opt/eclipse/eclipse **
Terminal=false
Categories=Development;IDE;Java;
StartupWMClass=Eclipse

After that, open that folder with nautilus:

nautilus ~/.local/share/applications

If you want to use this launcher outside dash/launcher (ex: as a desktop launcher) you need to add execution permission by right clicking the file and choosing Properties -> Permissions -> Allow execution, or, via the command-line:

chmod +x ~/.local/share/applications/opt_eclipse.desktop

Finally, drag opt_eclipse.desktop onto the launcher.


Uploaded on Oct 29, 2011

A short walkthrough of the Eclipse Software Development Kit.

Plugins used in this video:
1. PHPEclipse (http://www.phpeclipse.com/)
2. Aptana Studio (http://www.aptana.com/)
3. Subversive (http://www.eclipse.org/subversive/)

Uploaded on Nov 24, 2011

Tutorial showing installation, requirements and configuration of Eclipse itself and the PHPEclipse plug-in.

Link mentioned in the video regarding line endings: http://www.evolt.org/node/60247 (scroll to Linefeeds part)

Published on Mar 16, 2013

A short tutorial outlining the features of PHPEclipse.


Published on Mar 22, 2013

A quick walkthrough on all the goodies Aptana plugin for Eclipse provides when editing HTML, CSS and JavaScript code.

Link about Java 7 and FTP problems on Windows 7+ mentioned in the video: http://stackoverflow.com/questions/69…


Published on Apr 3, 2013

Quick tips and tricks to help you effectively tackle the most repetitive activities during development – including an extra safeguard tip using the Local History.


Published on May 10, 2013

Presentation of 2 ways I know of to work with FTP and synchronization in Eclipse:

1. utilizing Aptana’s remote synchronization (http://www.aptana.com)
2. using the not-yet-so-deprecated FTP and WebDav Eclipse plugin (http://jcraft.com, http://eclipse.jcraft.com)

Published on May 26, 2013

Quick introduction to remote versioning systems with a peek into Eclipse’s SVN interface and TortoiseSVN program.

Link to SourceForge: https://sourceforge.net/
Link to GitHub: https://github.com/
Link to the Timeline: Inventions project: https://sourceforge.net/projects/time…

Unity launchers

Unity Launchers are actually files stored on your computer, with a ‘.desktop’ extension. In earlier Ubuntu versions, these files were simply used to launch a specific application, but in Unity they are also used to create right-click menus for each application, which you can access from the Unity Launcher.

This article describes how to create a working .desktop file for general use, but also how to add it to the Unity Launcher and/or how to edit a Unity Launcher itself, by editing its fields or by adding a right-click menu to it.

Creating a working .desktop file

There are currently two ways of creating a desktop file. The first is using a text editor, like Gedit, and the second is installing a program (gnome-panel) or using ‘alacarte’, both of which do the job for you. The former lets you “control” your launcher more than the latter, but the latter way is easier. Please note that this section will cover only the basics, not how to add shortcuts to your launcher. For this, please head to Adding shortcuts to a launcher.

Using a text editor

Open your favourite text editor, like Gedit or nano, and type in (copy and paste):

[Desktop Entry]
Version=x.y
Name=ProgramName
Comment=This is my comment
Exec=/home/alex/Documents/exec.sh
Icon=/home/alex/Pictures/icon.png
Terminal=false
Type=Application
Categories=Utility;Application;

These lines are enough for describing a simple launcher. Each launcher (.desktop file) consists of some basic fields.

  • Version is the version of this .desktop file.
  • Name is the name of the application, like ‘VLC media player’.
  • Comment is a phrase or two describing what this program does, like ‘Plays your music and videos files’.
  • Exec is the path to the executable file. The full path to the executable file must be used only in case it isn’t in any of the paths specified in the $PATH variable. For example, any files that are inside the path /usr/bin don’t need to have their full path specified in the Exec field, but only their filename. To see all the paths in the $PATH variable you can open a terminal using Ctrl+Alt+T and type in
    echo $PATH
  • Icon field is the icon that should be used by the launcher and represents the application. All icons that are under the directory /usr/share/pixmaps don’t need to have their full path specified, but their filename without the extension. For example, if the icon file is /usr/share/pixmaps/wallch.png, then the Icon field should be just ‘wallch’. All other icons should have their full path specified.
  • Terminal field specifies whether the application should run in a terminal window or not.
  • Type field specifies the type of the launcher file. The type can be Application, Link or Directory, but this article covers the ‘Application’ type.
  • Categories field specifies the category of the application. It is used by the Dash so as to categorize the applications. A launcher being a ‘Utility;Application;’ should be under the ‘Accessories’ section etc.

A realistic example of how a .desktop file looks like is the following:

[Desktop Entry]
Version=1.0
Name=BackMeUp
Comment=Back up your data with one click
Exec=/home/alex/Documents/backup.sh
Icon=/home/alex/Pictures/backup.png
Terminal=false
Type=Application
Categories=Utility;Application;

One last thing to add is that by setting executable rights to your .desktop file, it automatically takes the specified Icon and Name (specified in the corresponding fields), as it should be. Be careful though, the filename doesn’t really change, it still remains ‘launcher_name_here.desktop’ and not ‘Name_field_here’, the system chooses to display it like ‘Name_field_here’ because it’s nicer without the .desktop extension.

Adding a .desktop file to the Unity Launcher

In order to add your launcher to the Unity Launcher on the left, you have to place your .desktop file at /usr/share/applications/ or at ~/.local/share/applications/. After moving your file there, search for it in the Dash (Windows key -> type the name of the application) and drag and drop it to the Unity Launcher. Now your launcher (.desktop file) is locked on the Unity Launcher! If your desktop file cannot be found by doing a search from the Dash, you may need to read on…

To be more certain that your .desktop file will work properly, run it through desktop-file-validate, which will notify you of any errors or omissions. If there are no errors, desktop-file-validate will exit silently.
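
For example, using the launcher created in the Eclipse section above:

desktop-file-validate ~/.local/share/applications/opt_eclipse.desktop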

Once the file validates correctly, install it to the default location (probably /usr/share/applications) using the desktop-file-install program. This step may require superuser privileges. The desktop-file-install program may add some lines of its own to your .desktop file. There is no need to have the .desktop file be executable by anyone.
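
For example:

sudo desktop-file-install ~/.local/share/applications/opt_eclipse.desktop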

Please note that desktop-file-validate tends to be oversensitive at times, which means that it can output error messages on perfectly working .desktop files. Those error messages should be better seen as warnings rather than anything else. For more information on desktop entry specification please refer to http://standards.freedesktop.org/desktop-entry-spec/latest/

perl

Perl is a family of high-level, general-purpose, interpreted, dynamic programming languages. The languages in this family include Perl 5 and Perl 6.[4]

Though Perl is not officially an acronym,[5] there are various backronyms in use, such as: Practical Extraction and Reporting Language.[6] Perl was originally developed by Larry Wall in 1987 as a general-purpose Unix scripting language to make report processing easier.[7] Since then, it has undergone many changes and revisions. The latest major stable revision of Perl 5 is 5.18, released in May 2013. Perl 6, which began as a redesign of Perl 5 in 2000, eventually evolved into a separate language. Both languages continue to be developed independently by different development teams and liberally borrow ideas from one another.

The Perl languages borrow features from other programming languages including C, shell scripting (sh), AWK, and sed.[8] They provide powerful text processing facilities without the arbitrary data-length limits of many contemporary Unix tools,[9] facilitating easy manipulation of text files. Perl 5 gained widespread popularity in the late 1990s as a CGI scripting language, in part due to its parsing abilities.[10]

In addition to CGI, Perl 5 is used for graphics programming, system administration, network programming, finance, bioinformatics, and other applications. It’s nicknamed “the Swiss Army chainsaw of scripting languages” because of its flexibility and power,[11] and possibly also because of its perceived “ugliness”.[12] In 1998, it was also referred to as the “duct tape that holds the Internet together”, in reference to its ubiquity and perceived inelegance.[13]

Perl was originally named “Pearl”. Wall wanted to give the language a short name with positive connotations; he claims that he considered (and rejected) every three- and four-letter word in the dictionary. He also considered naming it after his wife Gloria. Wall discovered the existing PEARL programming language before Perl’s official release and changed the spelling of the name.[36]

When referring to the language, the name is normally capitalized (Perl) as a proper noun. When referring to the interpreter program itself, the name is often uncapitalized (perl) because most Unix-like file systems are case-sensitive. Before the release of the first edition of Programming Perl, it was common to refer to the language as perl; Randal L. Schwartz, however, capitalized the language’s name in the book to make it stand out better when typeset. This case distinction was subsequently documented as canonical.[37]

There is some contention about the all-caps spelling “PERL”, which the documentation declares incorrect[37] and which some core community members consider a sign of outsiders.[38] The name is occasionally expanded as Practical Extraction and Report Language, but this is a backronym.[39] Other expansions have been suggested as equally canonical, including Wall’s own humorous Pathologically Eclectic Rubbish Lister.[40] Indeed, Wall claims that the name was intended to inspire many different expansions.[41]

The Comprehensive Perl Archive Network (CPAN) currently has 121,260 Perl modules in 27,769 distributions, written by 10,733 authors, mirrored on 270 servers.

The archive has been online since October 1995 and is constantly growing.

CPAN, the Comprehensive Perl Archive Network, is an archive of over 114,000 modules of software written in the Perl programming language, as well as documentation for them.[1] It has a presence on the World Wide Web at www.cpan.org and is mirrored worldwide at more than 200 locations.[2] CPAN can denote either the archive network itself, or the Perl program that acts as an interface to the network and as an automated software installer (somewhat like a package manager). Most software on CPAN is free and open source software.[3] CPAN was conceived in 1993, and the first web-accessible mirror was launched in January 1997.[4]

Like many programming languages, Perl has mechanisms to use external libraries of code, making one file contain common routines used by several programs. Perl calls these modules. Perl modules are typically installed in one of several directories whose paths are placed in the Perl interpreter when it is first compiled; on Unix-like operating systems, common paths include /usr/lib/perl5, /usr/local/lib/perl5, and several of their subdirectories.

Perl comes with a small set of core modules. Some of these perform bootstrapping tasks, such as ExtUtils::MakeMaker, which is used for building and installing other extension modules; others, like CGI.pm, are merely commonly used. The authors of Perl do not expect this limited group to meet every need, however.

The CPAN’s main purpose is to help programmers locate modules and programs not included in the Perl standard distribution. Its structure is decentralized. Authors maintain and improve their own modules. Forking, and creating competing modules for the same task or purpose, is common. There is no formal bug tracking system, but there is a third-party bug tracking system that CPAN designated as the suggested official method of reporting issues with modules. Continuous development on modules is rare; many are abandoned by their authors, or go years between new versions being released. Sometimes a maintainer will be appointed to an abandoned module. They can release new versions of the module, and accept patches from the community to the module as their time permits. CPAN has no revision control system, although the source for the modules is often stored on GitHub. Also, the complete history of the CPAN and all its modules is available as the GitPAN project, making it easy to see the complete history of all the modules and to maintain forks. CPAN is also used to distribute new versions of Perl, as well as related projects, such as Parrot.

The CPAN is an important resource for the professional Perl programmer. With over 23,000 modules (containing 20,000,000 lines of code) as of July 2011, the CPAN can save programmers weeks of time, and large Perl programs often make use of dozens of modules. Some of them, such as the DBI family of modules used for interfacing with SQL databases, are nearly irreplaceable in their area of function; others, such as the List::Util module, are simply handy resources containing a few common functions.
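
As a tiny illustration of the latter kind, here is List::Util in use (a sketch; the numbers are arbitrary):

#!/usr/bin/perl
use strict;
use warnings;
use List::Util qw(sum max);   # a core module, handy for list arithmetic

my @nums = (3, 1, 4, 1, 5, 9);
print "sum: ", sum(@nums), "\n";   # sum: 23
print "max: ", max(@nums), "\n";   # max: 9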

Files on the CPAN are referred to as distributions. A distribution may consist of one or more modules, documentation files, or programs packaged in a common archiving format, such as a gzipped tar archive or a ZIP file. Distributions will often contain installation scripts (usually called Makefile.PL or Build.PL) and test scripts which can be run to verify the contents of the distribution are functioning properly. New distributions are uploaded to the Perl Authors Upload Server, or PAUSE (see the section Uploading distributions with PAUSE).

In 2003, distributions started to include metadata files, called META.yml, indicating the distribution’s name, version, dependencies, and other useful information; however, not all distributions contain metadata. When metadata is not present in a distribution, the PAUSE’s software will usually try to analyze the code in the distribution to look for the same information; this is not necessarily very reliable.

With thousands of distributions, CPAN needs to be structured to be useful. Distributions on the CPAN are divided into 24 broad chapters based on their purpose, such as Internationalization and Locale; Archiving, Compression, And Conversion; and Mail and Usenet News. Distributions can also be browsed by author. Finally, the natural hierarchy of Perl module names (such as “Apache::DBI” or “Lingua::EN::Inflect”) can sometimes be used to browse modules in the CPAN.

CPAN module distributions usually have names in the form of CGI-Application-3.1 (where the :: used in the module’s name has been replaced with a dash, and the version number has been appended to the name), but this is only a convention; many prominent distributions break the convention, especially those that contain multiple modules. Security restrictions prevent a distribution from ever being replaced, so virtually all distribution names do include a version number.

There is also a Perl core module named CPAN; it is usually differentiated from the repository itself by using the name CPAN.pm. CPAN.pm is mainly an interactive shell which can be used to search for, download, and install distributions. An interactive shell called cpan is also provided in the Perl core, and is the usual way of running CPAN.pm. After a short configuration process and mirror selection, it uses tools available on the user’s computer to automatically download, unpack, compile, test, and install modules. It is also capable of updating itself.

More recently, an effort to replace CPAN.pm with something cleaner and more modern has resulted in the CPANPLUS (or CPAN++) set of modules. CPANPLUS separates the back-end work of downloading, compiling, and installing modules from the interactive shell used to issue commands. It also supports several advanced features, such as cryptographic signature checking and test result reporting. Finally, CPANPLUS can uninstall a distribution. CPANPLUS was added to the Perl core in version 5.10.0.

Both modules can check a distribution’s dependencies and can be set to recursively install any prerequisites, either automatically or with individual user approval. Both support FTP and HTTP and can work through firewalls and proxies.

Install all dependent packages for CPAN

sudo apt-get install build-essential

Invoke the cpan command as a normal user

cpan

Once you hit Enter for “cpan” to execute, you will be asked a few questions. To make it simple for yourself, answer “no” to the first question so that the rest will be configured for you automatically.

Enter the commands below

make install
install Bundle::CPAN

Now all is set and you can install any perl module you want.

Type o conf init to reconfigure cpan.

The Best Perl Programmers Use Modern Perl

by chromatic

In 1987, Perl 1.0 changed the world. In the decades since then, the language has grown from a simple tool for system administration somewhere between shell scripting and C programming to a powerful, general purpose language steeped in a rich heritage.

Even so, most Perl 5 programs in the world take far too little advantage of the language. You can write Perl 5 programs as if they were Perl 4 programs (or Perl 3 or 2 or 1), but programs written to take advantage of everything amazing the worldwide Perl 5 community has invented, polished, and discovered are shorter, faster, more powerful, and easier to maintain than their alternatives.

They solve difficult problems with speed and elegance. They take advantage of the CPAN and its unparalleled library of reusable code. They get things done.

This productivity can be yours, whether you’ve dabbled with Perl for a decade or someone just handed you this book and said “Fix this code by Friday.”

Modern Perl is suitable for programmers of every level. It’s more than a Perl tutorial—only Modern Perl focuses on Perl 5.12 and 5.14, to demonstrate the latest and most effective time-saving features. Only Modern Perl explains how and why the language works, to let you unlock the full power of Perl.

Hone your skills. Sharpen your knowledge of the tools and techniques that make Perl so effective. Master everything Perl has to offer.

When you have to solve a problem now, reach for Perl. When you have to solve a problem right, reach for Modern Perl.

Visit the companion website at Modern Perl Books or read Modern Perl: the Book online.

Modern Perl installations include two clients to connect to, search, download, build, test, and install CPAN distributions, CPAN.pm and CPANPLUS. For the most part, each of these clients is equivalent for basic installation. This book recommends the use of CPAN.pm solely due to its ubiquity. With a recent version (as of this writing, 1.9800 is the latest stable release), module installation is reasonably easy. Start the client with:

    $ cpan

To install a distribution within the client:

    $ cpan
    cpan[1]> install Modern::Perl

… or to install directly from the command line:

    $ cpan Modern::Perl

Eric Wilhelm’s tutorial on configuring CPAN.pm http://learnperl.scratchcomputing.com/tutorials/configuration/ includes a great troubleshooting section.

cURL

cURL is a computer software project providing a library and command-line tool for transferring data using various protocols. The cURL project produces two products, libcurl and cURL. It was first released in 1997.

curl is a command line tool for transferring data with URL syntax, supporting DICT, FILE, FTP, FTPS, Gopher, HTTP, HTTPS, IMAP, IMAPS, LDAP, LDAPS, POP3, POP3S, RTMP, RTSP, SCP, SFTP, SMTP, SMTPS, Telnet and TFTP. curl supports SSL certificates, HTTP POST, HTTP PUT, FTP uploading, HTTP form based upload, proxies, cookies, user+password authentication (Basic, Digest, NTLM, Negotiate, kerberos…), file transfer resume, proxy tunneling and a busload of other useful tricks.

Working with HTTP from the command-line is a valuable skill for HTTP architects and API designers to have. The cURL library and curl command give you the ability to design a Request, put it on the pipe, and explore the Response. The downside to the power of curl is how much breadth its options cover. Running curl --help spits out 150 different flags and options. This article demonstrates nine basic, real-world applications of curl.

In this tutorial we’ll use the httpkit echo service as our end point. The echo server’s Response is a JSON representation of the HTTP request it receives.

Make a Request

Let’s start with the simplest curl command possible.

Request
curl http://echo.httpkit.com
Response
{
  "method": "GET",
  "uri": "/",
  "path": {
    "name": "/",
    "query": "",
    "params": {}
  },
  "headers": {
    "host": "echo.httpkit.com",
    "user-agent": "curl/7.24.0 ...",
    "accept": "*/*"
  },
  "body": null,
  "ip": "28.169.144.35",
  "powered-by": "http://httpkit.com",
  "docs": "http://httpkit.com/echo"
}

Just like that we have used curl to make an HTTP Request. The method, or “verb”, curl uses, by default, is GET. The resource, or “noun”, we are requesting is addressed by the URL pointing to the httpkit echo service, http://echo.httpkit.com.

You can add path and query string parameters right to the URL.

Request
curl http://echo.httpkit.com/path?query=string
Response
{ ...
  "uri": "/path?query=string",
  "path": {
    "name": "/path",
    "query": "?query=string",
    "params": {
      "query": "string"
    }
  }, ...
}

Set the Request Method

The curl default HTTP method, GET, can be set to any method you would like using the -X option. The usual suspects POST, PUT, DELETE, and even custom methods, can be specified.

Request
curl -X POST echo.httpkit.com
Response
{
    "method": "POST",
    ...
}

As you can see, the http:// protocol prefix can be dropped with curl because it is assumed by default. Let’s give DELETE a try, too.

Request
curl -X DELETE echo.httpkit.com
Response
{
    "method": "DELETE",
    ...
}

Set Request Headers

Request headers allow clients to provide servers with meta information about things such as authorization, capabilities, and body content-type. OAuth2 uses an Authorization header to pass access tokens, for example. Custom headers are set in curl using the -H option.

Request
curl -H "Authorization: OAuth 2c4419d1aabeec" \
     http://echo.httpkit.com
Response
{...
"headers": {
    "host": "echo.httpkit.com",
    "authorization": "OAuth 2c4419d1aabeec",
  ...},
...}

Multiple headers can be set by using the -H option multiple times.

Request
curl -H "Accept: application/json" \
     -H "Authorization: OAuth 2c3455d1aeffc" \
     http://echo.httpkit.com
Response
{ ...
  "headers": { ...
    "host": "echo.httpkit.com",
    "accept": "application/json",
    "authorization": "OAuth 2c3455d1aeffc" 
   }, ...
}

Send a Request Body

Many popular HTTP APIs today POST and PUT resources using application/json or application/xml rather than HTML form data. Let’s try PUTting some JSON data to the server.

Request
curl -X PUT \
     -H 'Content-Type: application/json' \
     -d '{"firstName":"Kris", "lastName":"Jordan"}' \
     echo.httpkit.com
Response
{
   "method": "PUT", ...
   "headers": { ...
     "content-type": "application/json",
     "content-length": "40"
   },
   "body": "{\"firstName\":\"Kris\",\"lastName\":\"Jordan\"}",
   ...
 }

Use a File as a Request Body

Escaping JSON/XML at the command line can be a pain and sometimes the body payloads are large files. Luckily, cURL’s @readfile macro makes it easy to read in the contents of a file. If we had the above example’s JSON in a file named “example.json” we could have run it like this, instead:

Request
curl -X PUT \
     -H 'Content-Type: application/json' \
     -d @example.json \
     echo.httpkit.com

POST HTML Form Data

Being able to set a custom method, like POST, is of little use if we can’t also send a request body with data. Perhaps we are testing the submission of an HTML form. Using the -d option we can specify URL encoded field names and values.

Request
curl -d "firstName=Kris" \
     -d "lastName=Jordan" \
     echo.httpkit.com
Response
{
  "method": "POST", ...
  "headers": {
    "content-length": "30",
    "content-type":"application/x-www-form-urlencoded"
  },
  "body": "firstName=Kris&lastName=Jordan", ...
}

Notice the method is POST even though we did not specify it. When curl sees form field data it assumes POST. You can override the method using the -X flag discussed above. The “Content-Type” header is also automatically set to “application/x-www-form-urlencoded” so that the web server knows how to parse the content. Finally, the request body is composed by URL encoding each of the form fields.

POST HTML Multipart / File Forms

What about HTML forms with file uploads? As you know from writing HTML file upload forms, these use a multipart/form-data Content-Type, specified with the enctype attribute in HTML. In cURL we can pair the -F option and the @readFile macro covered above.

Request
curl -F "firstName=Kris" \
     -F "publicKey=@id_rsa.pub;type=text/plain" \
     echo.httpkit.com
Response
{
  "method": "POST",
  ...
  "headers": {
    "content-length": "697",
    "content-type": "multipart/form-data;
    boundary=----------------------------488327019409",
    ... },
  "body": "------------------------------488327019409\r\n
           Content-Disposition: form-data;
           name=\"firstName\"\r\n\r\n
           Kris\r\n
           ------------------------------488327019409\r\n
           Content-Disposition: form-data;
           name=\"publicKey\";
           filename=\"id_rsa.pub\"\r\n
           Content-Type: text/plain\r\n\r\n
           ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAkq1lZYUOJH2
           ... more [a-zA-Z0-9]* ...
           naZXJw== krisjordan@gmail.com\n\r\n
           ------------------------------488327019409
           --\r\n",
...}

Like with the -d flag, when using -F curl will automatically default to the POST method, set the multipart/form-data content-type header, calculate the content length, and compose the multipart body for you. Notice how the @readFile macro will read the contents of a file into any string; it’s not just a standalone operator. The “;type=text/plain” suffix specifies the MIME content-type of the file. Left unspecified, curl will attempt to sniff the content-type for you.

Test Virtual Hosts, Avoid DNS

Testing a virtual host or a caching proxy before modifying DNS and without overriding hosts is useful on occasion. With cURL, just point the request at your host’s IP address and override the default Host header cURL sets up.

Request
curl -H "Host: google.com" 50.112.251.120
Response
{
  "method": "GET", ...
  "headers": {
    "host": "google.com", ...
  }, ...
}

View Response Headers

APIs are increasingly making use of response headers to provide information on authorization, rate limiting, caching, etc. With cURL you can view the headers and the body using the -i flag.

Request
curl -i echo.httpkit.com 
Response
HTTP/1.1 200 OK
Server: nginx/1.1.19
Date: Wed, 29 Aug 2012 04:18:19 GMT
Content-Type: application/json; charset=utf-8
Content-Length: 391
Connection: keep-alive
X-Powered-By: http://httpkit.com

{
  "method": "GET",
  "uri": "/", ...
}

Shameless plug: Do you hack on REST API integrations or implementations? Wiretap is an HTTP debugger you can use to see every request and response between any client and HTTP API in real time. It’s entering private beta soon. Help test it!

on an Ubuntu system (probably Debian too)

$ sudo apt-get install php5-curl

The basic idea behind the cURL functions is that you initialize a cURL session using curl_init(), then you set all your options for the transfer via curl_setopt(), then you execute the session with curl_exec(), and then you finish off your session using curl_close(). Here is an example that uses the cURL functions to fetch the example.com homepage into a file:

<?php

$ch = curl_init("http://example.iana.org/");   // initialize the cURL session
$fp = fopen("example_homepage.txt", "w");      // file to hold the response

curl_setopt($ch, CURLOPT_FILE, $fp);   // write the response body to the file
curl_setopt($ch, CURLOPT_HEADER, 0);   // do not include response headers

curl_exec($ch);    // perform the transfer
curl_close($ch);   // finish the session
fclose($fp);
?>