List all entity classes managed by Doctrine in Symfony2 controller
https://abendstille.at/blog/?p=163 – 25 Aug 2013

In a recent project (virtualidentityag/hydra-twitter) I tried to create a Symfony2 form field that lets you choose from the list of entities managed by Doctrine, so that the user can decide which entity a REST result set should be mapped to. I thought there was no easy way to do this. Here is how I found a solution.

My first guess was to somehow reflect the schema, and I found that Doctrine provides a SchemaManager. Unfortunately, creating a schema through the schema manager did not help: I could not derive the class names of the entities from it. I was, however, able to get a list of the tables that are currently managed (all of this runs inside a Symfony2 controller):

$em = $this->getDoctrine()->getManager();
$tables = $em->getConnection()->getSchemaManager()->listTables();

The tables are all instances of Doctrine\DBAL\Schema\Table. You cannot derive the entity class from the table directly, because Doctrine does not work that way: it uses a hydrator to “deserialize” the database result into your own entities.

My second guess was to search Doctrine for a function that returns the fully qualified class name for a Table instance. Unfortunately there is no such thing. In fact, I have to admit that I did not fully understand how Doctrine resolves the names you use in DQL statements, because its DQL parsing code is quite complicated.

As I remembered from Symfony2 example code, you do not have to provide the FQCN in a DQL query – it suffices to write “YourBundle:YourEntity”. Doctrine then finds your entity class automatically by looking up the namespace of your bundle, assuming a sub-namespace called “Entity”, and constructing the class name from that coding convention.
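
To make the convention concrete, here is a minimal sketch of the shorthand next to the FQCN form of the same query (AcmeDemoBundle and its Product entity are just placeholder names):

// both queries are equivalent: Doctrine expands the alias
// "AcmeDemoBundle:Product" to "Acme\DemoBundle\Entity\Product" by convention
$em = $this->getDoctrine()->getManager();
$shorthand = $em->createQuery(
    'SELECT p FROM AcmeDemoBundle:Product p WHERE p.price > :price'
);
$fqcn = $em->createQuery(
    'SELECT p FROM Acme\DemoBundle\Entity\Product p WHERE p.price > :price'
);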

But my code should not depend on bundle names, or on any bundle at all. So hard-coding this convention (and relying on the Symfony/Doctrine naming scheme) was not an option in this situation.

Investigating Doctrine’s code further, I found a way to list all namespaces that are searched for entities, using Doctrine’s configuration object:

$em = $this->getDoctrine()->getManager();
$namespaces = $em->getConfiguration()->getEntityNamespaces();

From here on the next step was easy, although dirty – I did not like what I did here:

// warning: don't use this!
$entities = array();
$em = $this->getDoctrine()->getManager();
$tables = $em->getConnection()->getSchemaManager()->listTables();
$namespaces = $em->getConfiguration()->getEntityNamespaces();
foreach ($tables as $table) {
    foreach ($namespaces as $namespace) {
        $class = $namespace.'\\'.$table->getName();
        if (class_exists($class)) {
            $entities[] = $class;
            break;
        }
    }
}

I showed the code to a friend of mine (@traktorjosh), and he pointed out that it would stop working the moment somebody gives their table a different name via the Table annotation: @Table(name="notWorking").

Back to square one. I was frustrated. Then I decided to take a look at the UpdateCommand that is called when you execute “app/console doctrine:schema:update” – and there I found what I had been desperately looking for: the getAllMetadata method. Who would have thought it was hiding there? The method returns a list of ClassMetadata objects, each of which lets you “Get fully-qualified class name of this persistent class”. So the final solution was straightforward:

$entities = array();
$em = $this->getDoctrine()->getManager();
$meta = $em->getMetadataFactory()->getAllMetadata();
foreach ($meta as $m) {
    $entities[] = $m->getName();
}

Finally, it looks like I have found a clean way to list all entities. Again, hours of headache for four lines of code. If you have any suggestions on how to improve this, please let me know.
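
To close the loop on the original motivation – the form field for picking an entity class – something like the following sketch should work; the form builder variable and the field name are just placeholders:

// build a choices array of FQCNs for a Symfony2 "choice" field
$entities = array();
$em = $this->getDoctrine()->getManager();
foreach ($em->getMetadataFactory()->getAllMetadata() as $m) {
    $entities[$m->getName()] = $m->getName();
}

// $builder is a Symfony\Component\Form\FormBuilderInterface
$builder->add('entityClass', 'choice', array(
    'choices'  => $entities,
    'label'    => 'Map the REST result set to this entity',
    'required' => true,
));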

Backup a Drupal instance on a Hetzner Linux server
https://abendstille.at/blog/?p=150 – 10 Jul 2013

Ever wondered how to efficiently back up a Drupal installation on a Hetzner server – or, more generally, on a Debian Linux server with a secondary backup server? Here is my solution.

Problem description

  • two servers (one running Debian with the live system that should be backed up)
  • MySQL must be backed up
  • the file system must be backed up
  • the second server (backup server) has no shell access, only SFTP

Solution

  • write a script that creates compressed file backups (daily, weekly, monthly)
  • mount the backup server via Samba/CIFS
  • use automysqlbackup to create the MySQL backup
  • use rsync to sync the MySQL backup to the backup server

How

File Backup

Since rsync does not work directly against my backup server and Google did not come up with usable results for my searches (OK, maybe I was too stupid to google, or too eager to write this script myself), I wrote a script with the following requirements:

  • keep daily backups for a week
  • keep weekly backups for a month
  • keep monthly backups for a year
  • compress the backup
  • copy it via sftp

SFTP problems

As soon as I started writing the script mentioned above, I realized that this would not work out easily, because normally every secure connection requires a password that sometimes cannot be stored. In this case, however, there is a solution: Hetzner themselves wrote a good how-to. The only thing they missed in their guide is that you must create the SSH key without a password (I hate things without a password and assumed the password would be stored in some secure place, which was a silly assumption in hindsight). For completeness, here are the commands I executed:

# enter an EMPTY passphrase when generating the key. if a key file
# already exists, make sure you are not breaking existing
# password-less sftp connections by overwriting it
ssh-keygen
ssh-keygen -e -f .ssh/id_rsa.pub | grep -v "Comment:" > .ssh/id_rsa_rfc.pub

# when executing the following commands you will be asked for the
# remote password, but this should be the last time!
echo "mkdir .ssh" | sftp u15000@u15000.your-backup.de
echo "put .ssh/id_rsa_rfc.pub .ssh/authorized_keys" | sftp u15000@u15000.your-backup.de

Testing

Now you should be able to connect to your backup server via sftp without being asked for a password:

sftp u15000@u15000.your-backup.de

Troubleshooting

  • make sure you created the key without a passphrase
  • make sure the remote server address is correct
  • the .ssh folder might not exist on the remote server (you might have to create it first)
  • the authorized_keys file must follow RFC 4716, which means it must not contain a comment line

The file-backup-script

The following backup script requires a password-less SFTP connection, so make sure you followed the steps above. I created the script in /usr/local/bin and added it to the crontab.

nano /usr/local/bin/autofilebackup
chmod a+x /usr/local/bin/autofilebackup

Furthermore, I created a folder named “filebackup” on my backup server, directly in the root folder of a new SFTP connection.

#!/bin/bash

# configuration
LOCAL_BACKUP_PATH='/var/www/drupal.live'
REMOTE_BACKUP_PATH='/filebackup' # remote path where backups will be stored. make sure the subfolders "daily" "weekly" and "monthly" exist!
REMOTE_SERVER=u15000@u15000.your-backup.de

# initialize variables
dc=`date +'%s'`
BACKUP_FILE="live-"
BACKUP_FILE=$BACKUP_FILE`date +"%Y-%m-%d"`
BACKUP_FILE="$BACKUP_FILE.tar.bz2"

# create daily backup
tar -cjf /tmp/$BACKUP_FILE $LOCAL_BACKUP_PATH
echo "put /tmp/$BACKUP_FILE $REMOTE_BACKUP_PATH/daily/$BACKUP_FILE" | sftp $REMOTE_SERVER

# rotate daily backups (delete backups older than a week)
c=0
for i in `echo "ls $REMOTE_BACKUP_PATH/daily" | sftp $REMOTE_SERVER`
do
        c=`expr $c + 1`
        # the first three tokens of the sftp output are the echoed command - skip them
        [ $c -le 3 ] && continue
        d=`echo $i | sed -r 's/[^0-9]*([0-9]+-[0-9]+-[0-9]+).*/\1/'`
        d=`date -d $d +'%s'`
        echo $i
        if [ `expr $dc - 691200` -ge $d ]
        then
                echo "delete $i" | sftp $REMOTE_SERVER
                echo 'deleted'
        fi
done

# create weekly backup if sunday
if [ `date +%u` -eq 7 ]
then
        echo "put /tmp/$BACKUP_FILE $REMOTE_BACKUP_PATH/weekly/$BACKUP_FILE" | sftp $REMOTE_SERVER
fi

# rotate weekly backups (delete backups older than a month)
c=0
for i in `echo "ls $REMOTE_BACKUP_PATH/weekly" | sftp $REMOTE_SERVER`
do
        c=`expr $c + 1`
        [ $c -le 3 ] && continue
        d=`echo $i | sed -r 's/[^0-9]*([0-9]+-[0-9]+-[0-9]+).*/\1/'`
        d=`date -d $d +'%s'`
        echo $i
        if [ `expr $dc - 2678400` -ge $d ]
        then
                echo "delete $i" | sftp $REMOTE_SERVER
                echo 'deleted'
        fi
done

# create monthly backup if 1st of month
if [ `date +%e` -eq 1 ]
then
        echo "put /tmp/$BACKUP_FILE $REMOTE_BACKUP_PATH/monthly/$BACKUP_FILE" | sftp $REMOTE_SERVER
fi


# rotate monthly backups (delete backups older than a year)
c=0
for i in `echo "ls $REMOTE_BACKUP_PATH/monthly" | sftp $REMOTE_SERVER`
do
        c=`expr $c + 1`
        [ $c -le 3 ] && continue
        d=`echo $i | sed -r 's/[^0-9]*([0-9]+-[0-9]+-[0-9]+).*/\1/'`
        d=`date -d $d +'%s'`
        echo $i
        if [ `expr $dc - 31536000` -ge $d ]
        then
                echo "delete $i" | sftp $REMOTE_SERVER
                echo 'deleted'
        fi
done

# clean up local backup
rm /tmp/$BACKUP_FILE

Configuration

There are three variables that can be configured:

  1. LOCAL_BACKUP_PATH – where your Drupal installation (or whatever you want to back up) lives. As this is the source argument to tar, you can list multiple paths here, separated by spaces.
  2. REMOTE_BACKUP_PATH – where the compressed backup will be stored on the backup server, relative to the root folder of a new SFTP connection. The script DOES NOT create this folder (or its daily/weekly/monthly subfolders) automatically!
  3. REMOTE_SERVER – the connection details for the SFTP connection

Testing

Execute the script; depending on how big your folder is, it will finish sooner or later. You should see the output of the put command and how fast the backup was transferred.

MySQL Backup

For exporting the database to the file system (from where I can copy the backups to the backup server) I use automysqlbackup. I did not find an official guide on how to install automysqlbackup correctly, but it is in fact straightforward:

  • download it from sourceforge to your server
  • extract it
  • move the executable to /usr/local/bin
  • move the configuration file to /etc/automysqlbackup
  • insert the backup destination, username and password (and the databases you want to back up) into the configuration file

I personally like to have both a local and a remote backup, so I used /var/backups/db as the destination for automysqlbackup. Now you should be able to run the MySQL backup:

/usr/local/bin/automysqlbackup
ls /var/backups/db

The trouble with rsync – mounting the backup server via Samba/CIFS

rsync does not support SFTP as a transport, so I had to use a trick I do not really like – but what choice did I have? I created a local mount point for the backup server and ran rsync locally.

Unfortunately, mounting a remote server this way means you would have to mount it manually every time your server reboots. That is not what we want, so we also register the mount in /etc/fstab.

Also, mount.cifs was not installed on my Debian installation, so I had to install it first using apt-get.

apt-get install cifs-utils
nano /etc/fstab

The lines I added to this file are the following:

# /mnt/backup for backup-server and rsync of mysqlfiles
//u15000.your-backup.de/backup /mnt/backup cifs iocharset=utf8,rw,credentials=/etc/backup-credentials.txt,uid=0,gid=0,file_mode=0660,dir_mode=0770 0 0

This is also taken from the Hetzner guide.

The file /etc/backup-credentials.txt (mode 0600) has the following content (oh, we all love passwords stored in plain text, don’t we):

username=USERNAME
password=PASSWORD

Putting it all together

Now we are ready to install our crontab entries:

EDITOR=nano crontab -e

I added the following lines:

0 4 * * * /usr/local/bin/automysqlbackup
30 4 * * * rsync -rtv --delete /var/backups/db/ /mnt/backup/mysqlbackup/
0 5 * * * /usr/local/bin/autofilebackup

You can see that I give each process 30 minutes to finish. This might be paranoid, and you might decide differently. Another problem I want to point out here is consistency: you will not get a coherent snapshot of database and file system using this method. To achieve that, you would have to put your website into maintenance mode using drush, run the backup scripts as quickly as possible, and then end maintenance mode.

Of course, for bigger websites this is no solution. You would build a redundant system (using MySQL replication) and back up the file system using virtual machines and snapshots. VMware describes and offers such solutions, but there are others as well.

A good guide about rsync is this article.

Please let me know if you have any troubles or suggestions!

SSH2 Extension for MAMP on Mac OS X
https://abendstille.at/blog/?p=144 – 13 Jun 2013

If you want to use SSH2 with PHP (say you built a custom deployment system), you need the SSH2 extension for PHP. Here is how to get it working with MAMP on OS X.

I recently wrote an article on how to get the autotools installed on OS X. Do this first – you need autoconf, automake and libtool.

Then installing the SSH2 extension is easy as pie:

  1. Install the autotools
  2. Install libssh2 (as root)
  3. Build the ssh2 extension using pecl (you have to give pecl the explicit package version, because ssh2 is still beta)

Here is how this looked on my CLI:

cd /usr/local/src
sudo bash
curl -OL http://www.libssh2.org/download/libssh2-1.4.3.tar.gz
tar xzf libssh2-1.4.3.tar.gz
cd libssh2-1.4.3
./configure --prefix=/usr/local/
make
make install
exit
cd /Applications/MAMP/bin/php/php5.3.6/
./bin/pecl install channel://pecl.php.net/ssh2-0.12

Now the only thing remaining is enabling the module in your php.ini.
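
As a quick sanity check that the extension really loads, a tiny PHP script along these lines should do (host and credentials are placeholders):

// verify the pecl ssh2 extension is available
if (!extension_loaded('ssh2')) {
    die("ssh2 extension not loaded - check php.ini and restart MAMP\n");
}

// placeholder host and credentials - replace with your own
$connection = ssh2_connect('example.com', 22);
if ($connection && ssh2_auth_password($connection, 'deploy', 'secret')) {
    $stream = ssh2_exec($connection, 'uname -a');
    stream_set_blocking($stream, true);
    echo stream_get_contents($stream);
}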

Intl Extension for MAMP on Mac OS X
https://abendstille.at/blog/?p=131 – 11 Jun 2013

Many Symfony bundles require the intl PHP extension. Unfortunately my MAMP version lacks it. Here is how I got it to work.

First, a list of things that are missing on OS X/MAMP when adding the intl extension:

  • autoconf/automake/libtool
  • icu (International Components for Unicode)
  • php-source

I am not sure what came over Apple, but newer versions of Xcode no longer include the autotools.

My strategy for building the intl extension was to use pecl. I opted for that because it is said to be configuration-less. Can’t believe it? Me neither. But I still think it is a reasonably convenient way to install PHP extensions (when you cannot find a binary). Also, I always install everything into /usr/local, because I think that is where these things belong. Don’t forget to change the paths to your preferences.

Ok. Back to work.

Build Tools and ICU

Install autoconf/automake/libtool/ICU (many thanks to Jean-Sebastien) as root (this is system stuff, it is OK to do that as root):

sudo bash
cd /usr/local/src
curl -OL http://ftpmirror.gnu.org/autoconf/autoconf-2.68.tar.gz
tar xzf autoconf-2.68.tar.gz
cd autoconf-2.68
./configure --prefix=/usr/local
make
make install

cd /usr/local/src
curl -OL http://ftpmirror.gnu.org/automake/automake-1.11.tar.gz
tar xzf automake-1.11.tar.gz
cd automake-1.11
./configure --prefix=/usr/local
make
make install

cd /usr/local/src
curl -OL http://ftpmirror.gnu.org/libtool/libtool-2.4.tar.gz
tar xzf libtool-2.4.tar.gz
cd libtool-2.4
./configure --prefix=/usr/local
make
make install

cd /usr/local/src
curl -OL http://download.icu-project.org/files/icu4c/4.8.1.1/icu4c-4_8_1_1-src.tgz
tar xzf icu4c-4_8_1_1-src.tgz
cd icu/source
./configure --prefix=/usr/local
make
make install

Nice. Now you should have all the build basics that Apple (purposely?) neglected, plus ICU.

PHP source

MAMP does not include the PHP source. But – and I definitely like this – MAMP provides the source for customization. Very neat. They call it “MAMP components”. Grab the version you need from SourceForge. I needed 2.0.2 because my MAMP version was 2.0.5.

Follow these steps (adjusting your paths and versions):

  1. download the MAMP components from SourceForge
  2. unpack them – you get a bunch of zips
  3. locate the PHP version you are using (I was using 5.3.6 as of writing this post)
  4. locate your MAMP PHP folder. Mine was located in /Applications/MAMP/bin/php/php5.3.6
  5. create an include/php folder inside this folder so that the following path is valid: /Applications/MAMP/bin/php/php5.3.6/include/php
  6. unpack the contents of the previously downloaded (MAMP components) php-5.3.6.tar.gz into this very folder
  7. open a terminal and call the configure routine in this folder: ./configure --prefix=/Applications/MAMP/bin/php/php5.3.6
  8. happy pecl-ing!

OK – I did all this on the command line, so here is what my commands looked like (maybe this helps more than the list above – this time no sudo, folks, this is not system stuff!):

cd ~/Downloads
unzip MAMP_components_2.0.2.zip
cd /Applications/MAMP/bin/php/php5.3.6
mkdir include
tar xzf ~/Downloads/MAMP_components_2.0.2/php-5.3.6.tar.gz -C include
mv include/php-5.3.6 include/php
cd include/php
./configure --prefix=/Applications/MAMP/bin/php/php5.3.6

We have now extended the basic MAMP installation with the PHP source code.

Finally install Intl-PECL extension

I did this after switching to the corresponding php-version folder.

cd /Applications/MAMP/bin/php/php5.3.6
./bin/pecl install intl

The installation routine asks you for the location of your ICU installation. I answered /usr/local. Adjust this to your setup if you used different paths.

Enable the intl extension in your php.ini by adding extension=intl.so to it. And don’t forget to restart MAMP.
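
A quick way to confirm that the extension is active is a tiny test script along these lines (run it with the MAMP PHP binary, e.g. /Applications/MAMP/bin/php/php5.3.6/bin/php, to be sure you are testing the right PHP):

// check the intl extension and format a number with it
if (!extension_loaded('intl')) {
    die("intl extension not loaded - check php.ini and restart MAMP\n");
}

$formatter = new NumberFormatter('de_DE', NumberFormatter::DECIMAL);
echo $formatter->format(1234567.891), "\n"; // prints something like 1.234.567,891
echo Locale::getDefault(), "\n";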

That should be it. HTH? Tell me in the comments or contact me via Twitter @gruzilla if you have trouble following this post.

Cleaning up

We no longer need the files generated in /usr/local/src – but watch out that you only delete what you just created. The MAMP components in your Downloads folder can obviously be deleted as well.

Troubleshooting

Some of my friends encountered problems following the above guide. Here are some hints.

1) fatal error: ‘php.h’ file not found

Make sure you installed the MAMP components correctly and that php.h (and all the files that come with it) is in the correct place. The correct path would be:

/Applications/MAMP/bin/php/php5.3.6/include/php/main/php.h

Of course this might differ if you use another php-version.

2) Unable to detect ICU prefix or … failed. Please verify ICU install prefix and make sure icu-config works.

There are two possibilities here: a) you installed ICU into the wrong directory (wrong --prefix when building ICU), or b) you gave pecl the wrong answer about where your ICU libraries are located.

  • The correct prefix is /usr/local
  • The correct answer is either /usr/local or /usr/local/ – whatever works for you

If you built the ICU libraries in the wrong location (this happens when you give configure a different prefix), you may want to uninstall them first using make uninstall.

3) My PHP binary won’t find intl functions although everything else worked!

OK – there can be multiple reasons for that.

  • you might be using the wrong PHP binary: run which php in your terminal to check which binary is used. You can change this by altering your $PATH environment variable. Do not forget to restart your terminal.
  • you might have changed the wrong php.ini file: run php -i | grep php.ini to check which php.ini your binary loads, and change that file.
  • the intl.so file might not have been created in the correct location (or the build might have failed) – check that the file exists. In my environment it is located here: /Applications/MAMP/bin/php/php5.3.6/lib/php/extensions/no-debug-non-zts-20090626/intl.so

If you encounter any other problems, please let me know in the comments or contact me directly.

Google+
https://abendstille.at/blog/?p=121 – 08 Jul 2011

everybody does it right now, so here are my rather ironic and maybe over-exaggerated ten cents on +

just let me say some introductory words: i like +, i’d prefer it to fb if everybody was on it (not only geeks and fanboys who think they have to join just because it’s new). you made a point: many features + has are heavily missed in fb! i read somewhere (not literally) “you use your fb, but you make your +”. + is, for the time being, not a social but a medial experience to me. i can customize a lot, and that is just purely awesome. my suggestion: make this your primary rule of concept, because fb doesn’t let me decide crucial things (don’t know why. money?) and i hate that, since i feel mature. i tried to think about what irritates me most about +’s behaviour…

1) want to mute circles.
-> i created a “roflwhoareyou” circle to add people i don’t care about, people i simply do not know (maybe your suggestion thing did not work properly), yet their “totally interesting” posts still appear in my stream. this sucks, because i simply don’t want to see them. and don’t come at me with “then don’t add them”, because then i’d say: ok, just explain the sense of circles to me again, please.

2) reset circles when writing posts, or let me define the default!
ex:
write message to circle “men”
-> yey! vaginal!

write message to circle “women”
-> oh damn fu, just sent it to “men” because + remembered my last msg went to men 🙁
now everybody thinks i’m gay. shit.

3) want to see combined circles stream
-> i can only see one stream or all streams at once. sucks.

4) want to do a full text search on my (filtered) stream.
-> cannot! oh, sorry i forgot: you don’t have any xp in searching…

5) want to add friends to chat without awaiting their confirmation
-> i’ve already added them to a circle i trust, they have done so too, so why do we have to confirm this again? you don’t give a shit whether i add somebody to a circle, and you tell everybody about it. be consistent: if they don’t want to talk to me, they won’t. if they don’t want me to be able to add them without their confirmation, then let them decide that.

6) generally, +’s defaults are crap. + is the gentoo of social media networks, so let ME (=ma) decide.
-> if i want somebody to only be able to add me to chat with my confirmation, let MA decide (not you, crappy google!)

7) search twitter hashtags in sparks!
-> sparking zendfw doesn’t find anything about Zend Framework although it’s a common twitter hashtag. don’t you search twitter because it’s not yours? your whole business is built upon other people’s content, so include that.

8) show me my online friends in hangouts!
-> hangouts are awesome, but this would just add the finishing touch. besides, youtube vids are sometimes not played synchronously.

9) don’t want to see duplicate posts if someone shared a friend’s post.
-> make something like “shared by XY”; it’s really annoying if 4 of my friends share the same message from one of my friends. not only annoying, this is SPAM! (you know that spam is bad for user xp, don’t you? google mail *wink*)

10) cannot find people by [any f** social network] name
-> come on, google – you’re the #1 search engine on teh interwebz! don’t tell me you do not know my twitter friends.

Instagram API for PHP
https://abendstille.at/blog/?p=107 – 07 Dec 2010

Recently I wondered why instagr.am had not published its API, especially because I assumed they were already using one internally. So I created a WLAN on my MacBook, connected the iPhone to it and sniffed some Instagram traffic using Wireshark. What I found is the following API. I then started to develop a PHP library for it on GitHub.

[UPDATE] Yesterday @mislav contacted me and linked me to his API documentation, which is much nicer than mine 🙂 Here you go: https://github.com/mislav/instagram/wiki

The base for all URLs is http://instagr.am/api/v1/ – the “v1” suggests that they are already working on a v2, which will probably be released at some point. Normally instagr.am uses gzip as the content encoding for communication.

The following list is not complete. You can probably guess other actions from the URL names; I only tested a few, and only some of them made it into my PHP library.

“pk” stands for primary key, I suppose.

Action: login
URL: accounts/login/
Parameters: username, password, device_id
Response: {"status":"ok"} or {"status":"failed", "message":"some error message"}

Action: user details
URL: users/[pk]/info/
Parameters: –
Response: {"status": "ok", "user": {"username": "xxx", "media_count": 0, "following_count": 0, "profile_pic_url": "http://xxx.jpg", "full_name": "xxx", "follower_count": 0, "pk": pk}}

Action: post comment
URL: media/[pk]/comment
Parameters: comment_text
Response: {"comment": {"media_id": pk, "text": "xxx", "created_at": 0, "user": {"username": "xxx", "pk": pk, "profile_pic_url": "http://xxx.jpg", "full_name": "xxx"}, "content_type": "comment", "type": 1}}

Action: change media data
URL: media/configure
Parameters: device_timestamp=0&caption=xxx&location={"name":"xxx","lng":0,"lat":0,"external_id":0,"address":"xxx","external_source":"foursquare"}&source_type=0&filter_type=15
Response: {"status": "ok", "media": {"image_versions": [{"url": "xxx.jpg", "width": 150, "type": 5, "height": 150}, {"url": "xxx.jpg", "width": 306, "type": 6, "height": 306}, {"url": "http://xxx.jpg", "width": 612, "type": 7, "height": 612}], "code": "xxx", "likers": [], "taken_at": 0, "location": {"external_source": "foursquare", "name": "xxx", "address": "xxx", "lat": 0, "pk": pk, "lng": 0, "external_id": 0}, "filter_type": "15", "device_timestamp": 0, "user": {"username": "xxx", "pk": pk, "profile_pic_url": "http://xx.jpg", "full_name": "xxx"}, "media_type": 1, "lat": 0, "pk": 0, "lng": 0, "comments": [{"media_id": 0, "text": "xxx", "created_at": 0, "user": {"username": "xxx", "pk": pk, "profile_pic_url": "http://xxx.jpg", "full_name": "xxx"}, "content_type": "comment", "type": 1}]}}

Action: like media
URL: media/[pk]/like/
Parameters: –
Response: {"status": "ok"}

Action: upload media
URL: media/upload (multipart/form-data)
Parameters: device_timestamp=0, lat=0, lng=0, photo=(binary data, filename="file")
Response: {"status": "ok"}

Action: show friend details
URL: friendships/show/[pk]
Parameters: –
Response: {"following": true, "status": "ok", "incoming_request": false, "followed_by": true, "outgoing_request": false}

Action: show own feed
URL: feed/timeline/? (the trailing ? may be a number that defines how many elements to load, but I have not tested that)
Parameters: –
Response: very large (and would be a repetition), therefore only an overview: {"status":"ok", "items":[— a list of media items, similar to the response when changing a media —], "num_results":0}

Action: show feed of a user
URL: feed/user/? (same as above)
Parameters: –
Response: same as above
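
To give an idea of how such an endpoint could be called from PHP, here is a rough cURL sketch for the login action; the cookie handling and the exact parameter set are assumptions on my part:

// hedged sketch: POST to the sniffed login endpoint using cURL
$ch = curl_init('http://instagr.am/api/v1/accounts/login/');
curl_setopt_array($ch, array(
    CURLOPT_POST           => true,
    CURLOPT_POSTFIELDS     => http_build_query(array(
        'username'  => 'your-username',
        'password'  => 'your-password',
        'device_id' => 'some-device-id',
    )),
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_ENCODING       => 'gzip',                  // instagr.am talks gzip
    CURLOPT_COOKIEJAR      => '/tmp/instagram.cookie', // keep the session for later calls
    CURLOPT_COOKIEFILE     => '/tmp/instagram.cookie',
));
$response = json_decode(curl_exec($ch), true);
curl_close($ch);
var_dump($response); // expect {"status":"ok"} on success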

As I already said, there are definitely more possible actions, but I did not try every one. I am happy with reading from instagr.am, which is what I do at ma instagram.

I hope I can bring this API documentation into a nicer, more readable form in the near future.

Hope that helps someone.

diaspora
https://abendstille.at/blog/?p=102 – 16 Sep 2010

after installing diaspora yesterday i expected a full-featured fb clone with enhanced privacy settings.

i have to add that i was never a fan of hyped projects, especially if there is a lot of money behind them. for me, 200k is so much i cannot even imagine it. but i also hate people who always talk about money and profit. so i took a look at diaspora as the (maybe typically intended) guy who is able to install and use it. however, after installing diaspora on our ubuntu machine (ok, at first it did not work because i was still on hardy – you know, never change a running system and so on) i was disappointed.

maybe i have to say a few words about my background as a software engineer: i worked in a team realizing a software project with 300+ features, and the budget was much less (and more imaginable 😉 ). at the moment it is at production level and is used by several clients.

this is not meant to be a comparison at first sight; it is meant to show the disappointment other devs might feel after checking out the source. by “other devs” i don’t mean myself – our project is situated in a completely different area – i mean the devs of Elgg, Japixx or other projects who did not get the same media response and financial support but still fight for the same cause: enhancing privacy and trust. at least i don’t know of any sources proving that they are not heavily funded, which may however be the case.

the idea of a heavily distributed social network is very nice. also, the fact that diaspora does not depend on other protocols or on a server infrastructure is, i think, the right way to achieve the goal.

so what did i like about diaspora? i liked the fact that every url includes a hash – no nice urls. if some proxy sits in the middle or a link is shared, no one can make predictions based on the url alone. i liked the aspects. i liked the simple user interface (…the parts that worked).

the guys at diaspora definitely put a lot of work into this pre-alpha release. in fact they wrote a very detailed installation guide (which would have worked seamlessly if our server had been up to date). i really like projects that explain how the program can be installed. a post in the discussion group (currently closed for whatever reason) hit the mark: “installed – now where to go”. the guide simply misses this part!

after installing, you have no idea what to do next. play around – ok, but as a dev i still have no idea how it is intended to work. is it a bug that a friend can only be added to one aspect, or is it a feature? so: no documentation. now to the core of the whole diaspora hype – the federated architecture. there is no documentation that explains how to connect two diaspora seeds, just a quick note in the dev group to take a look at some seed files. so i took a look at those files. still no idea how this is intended to work, because there are no comments explaining anything.

this is the real reason why i am disappointed: first, i could not try out the features of the ui due to bugs (which is ok – it still is a pre-alpha), and second, i could not understand how the core is intended to work (without reading millions of lines of code).

so don’t call it real at the moment. you put a lot of work into what you’ve done – why not add 10 more lines explaining what to do next after installing it?

jDownloader lost on secondary screen
https://abendstille.at/blog/?p=93 – 16 Aug 2010

ok – this is just a quick post about a very annoying problem i’ve run into several times. you start jdownloader and put it on the secondary monitor. then you disconnect your laptop, and jdownloader is lost: it is still on the “secondary monitor”, living as a ghost between your spaces (at least under mac os x). i searched very long for a solution to this – without success. now i found a solution myself.

at first i thought twiddling around in the system settings might solve the problem: no way

then i hoped to fix it with applescript (tell the app to position the window somewhere else) – no way, it’s java you *$”$§, not cocoa

i googled for hours but only found other people having the same problem and no solution.

the easiest way to fix this is, by far, to drag the window to the primary monitor before disconnecting the secondary one. 101

well – what do you do if you are on holidays? every one of us desperately needs jdownloader in his/her holidays, don’t we?!

so what to do?!

i searched for a config file, something similar to a plist or whatever.

and then i found it: database.script

cd /Applications/jDownloader.app/Contents/Resources/Java/config
nano database.script

then ctrl+w to find “jdgui” – you will find something like this:

INSERT INTO CONFIG VALUES('jdgui','aced.......

DELETE this line and you’re done 🙂

(don’t forget to restart jdownloader)

HTH

Multitouch Table, Part 1
https://abendstille.at/blog/?p=64 – 30 Mar 2010

Today I finished the first part of my multitouch table. Now “all” that is left is mounting it on supports and calibrating the lasers. Of course I also still need to get a mirror for the rear projection. But I think version 1 will be a “hanging” variant for now. We will see which is easier. Here are three pictures:

The touch surface, with cut-outs at the corners to place the lasers

The camera that will film the touches lies below the touch surface

One of the lasers lying at a corner

Wurfaxt
https://abendstille.at/blog/?p=60 – 02 Mar 2010

Last week I published two articles on 3mo’s tech blog wurfaxt:

one on how to set up a DIY web server under OS X Snow Leopard

and the result of the collaboration with Johannes (morgenstille turboblog) for a university course about “intellectual property” (brrr, that term makes me shudder).

Have fun reading…
