My first guess was to somehow reflect the schema, and I found that there is a SchemaManager. Unfortunately, creating a schema using the schema manager did not help: I could not derive the class names of the entities from it. I was, however, able to get a list of the tables that are currently managed (I am working inside a Symfony2 controller):
$em = $this->getDoctrine()->getManager();
$tables = $em->getConnection()->getSchemaManager()->listTables();
The tables are all instances of Doctrine\DBAL\Schema\Table. You cannot derive the entity class from the table directly, because Doctrine does not work that way: it uses a hydrator to "deserialize" the result from the database into your own entities.
My second guess was to search Doctrine for functions that return the fully qualified class name for a Table instance. Unfortunately, there are none. In fact, I have to admit that I did not fully understand how Doctrine resolves the names that you use in DQL statements, because its DQL parsing code is quite complicated.
As I remembered from Symfony2 example code, you do not have to provide the FQCN in a DQL query – it suffices to write "YourBundle:YourEntity". Doctrine then finds your entity class name automatically by taking the namespace of your bundle, assuming there is a sub-namespace called "Entity", and constructing the class name from that naming convention.
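As a small illustration of that shorthand (the bundle and entity names here are made-up examples, assuming the bundle registers the Acme\DemoBundle\Entity namespace), these two DQL queries would be equivalent:

```php
// hypothetical example: "AcmeDemoBundle:User" is expanded by Doctrine
// to the FQCN using the bundle's registered entity namespace
$q1 = $em->createQuery('SELECT u FROM AcmeDemoBundle:User u');
$q2 = $em->createQuery('SELECT u FROM Acme\DemoBundle\Entity\User u');
```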
But my code should not depend on bundle names, or on any bundle at all. So hard-coding the Symfony/Doctrine convention into my code was not an option in this situation.
On further investigation of Doctrine's code I found a way to list all namespaces that are searched for entities, using Doctrine's configuration object:
$em = $this->getDoctrine()->getManager();
$namespaces = $em->getConfiguration()->getEntityNamespaces();
From here on the next step was easy, although dirty – I did not like what I did here:
// warning: don't use this!
$entities = array();
$em = $this->getDoctrine()->getManager();
$tables = $em->getConnection()->getSchemaManager()->listTables();
$namespaces = $em->getConfiguration()->getEntityNamespaces();
foreach ($tables as $table) {
    foreach ($namespaces as $namespace) {
        if (class_exists($namespace.'\\'.$table->getName())) {
            $entities[] = $namespace.'\\'.$table->getName();
            break;
        }
    }
}
I showed the code to a friend of mine (@traktorjosh), and he pointed out that my code would stop working the moment somebody decides to give their table a different name using the Table annotation: @Table(name="notWorking").
Back to the start. I was frustrated. Then I decided to take a look at the UpdateCommand that is called when you execute "app/console doctrine:schema:update". And there I found what I had desperately been looking for: the getAllMetadata method. Who would have thought it hides there? The method returns a list of ClassMetadata objects, which let you "Get fully-qualified class name of this persistent class". So the final solution was straightforward:
$entities = array();
$em = $this->getDoctrine()->getManager();
$meta = $em->getMetadataFactory()->getAllMetadata();
foreach ($meta as $m) {
    $entities[] = $m->getName();
}
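If you additionally need the mapping between tables and entities (the case @traktorjosh pointed out above), ClassMetadata also exposes the table name. A sketch building on the snippet above – untested against all mapping styles, so treat it as an idea rather than a drop-in:

```php
// map table names to entity FQCNs; getTableName() reflects a custom
// @Table(name="...") annotation as well
$map = array();
$em = $this->getDoctrine()->getManager();
foreach ($em->getMetadataFactory()->getAllMetadata() as $m) {
    $map[$m->getTableName()] = $m->getName();
}
```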
Finally, it looks like I have found a clean method for listing all entities. Again, hours of headache for four lines of code. If you have any suggestions on how to make this better, please let me know.
Since rsync does not work directly with my backup server, and Google did not come up with usable results for my search requests (OK, maybe I was too stupid to google, or too eager to write this script myself), I wrote a script with the following requirements:
As soon as I started to write the script I realized that this would not work out easily, because normally every secure connection requires a password that sometimes cannot be stored. But in this case there is a solution. Hetzner themselves wrote a good how-to. The only thing they missed in their guide is that you must create the SSH key without a password (I hate things without passwords and thought the password would be stored in a secure place or something – a silly assumption, looking back). For completeness, here are the commands I executed:
# enter an EMPTY password when generating this. if there already
# exists a key file, make sure you have no existing
# sftp connections without password
ssh-keygen
ssh-keygen -e -f .ssh/id_rsa.pub | grep -v "Comment:" > .ssh/id_rsa_rfc.pub
# executing the following commands you will be asked for the
# remote password, but this should be the last time!
echo "mkdir .ssh" | sftp u15000@u15000.your-backup.de
echo "put .ssh/id_rsa_rfc.pub .ssh/authorized_keys" | sftp u15000@u15000.your-backup.de
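To see what the key conversion above actually produces, here is a throwaway demonstration in a temp directory (no remote server involved; all paths are scratch files that are deleted again):

```shell
# generate a password-less test key and convert the public part to
# RFC4716 format, exactly like the commands above do for the real key
tmp=$(mktemp -d)
ssh-keygen -q -t rsa -N "" -f "$tmp/id_rsa"
ssh-keygen -e -f "$tmp/id_rsa.pub" | grep -v "Comment:" > "$tmp/id_rsa_rfc.pub"
head -n 1 "$tmp/id_rsa_rfc.pub"   # the RFC4716 header line
rm -rf "$tmp"
```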
Now you should be able to connect to your backup server without a password using sftp:
sftp u15000@u15000.your-backup.de
The following backup script requires a password-less sftp connection, so make sure you followed the steps above. I created the file backup script in /usr/local/bin and added it to crontab.
nano /usr/local/bin/autofilebackup
chmod a+x /usr/local/bin/autofilebackup
Furthermore, I created a folder named "filebackup" on my backup server, directly in the root folder of a new sftp connection.
#!/bin/bash

# configuration
LOCAL_BACKUP_PATH='/var/www/drupal.live'
# remote path where backups will be stored.
# make sure the subfolders "daily", "weekly" and "monthly" exist!
REMOTE_BACKUP_PATH='/filebackup'
REMOTE_SERVER=u15000@u15000.your-backup.de

# initialize variables
dc=`date +'%s'`
BACKUP_FILE="live-"
BACKUP_FILE=$BACKUP_FILE`date +"%Y-%m-%d"`
BACKUP_FILE="$BACKUP_FILE.tar.bz2"

# create daily backup
tar -cjf /tmp/$BACKUP_FILE $LOCAL_BACKUP_PATH
echo "put /tmp/$BACKUP_FILE $REMOTE_BACKUP_PATH/daily/$BACKUP_FILE" | sftp $REMOTE_SERVER

# rotate daily backups (delete backups older than a week)
c=0
for i in `echo "ls $REMOTE_BACKUP_PATH/daily" | sftp $REMOTE_SERVER`
do
    c=`expr $c + 1`
    # the first three tokens of the sftp output are prompt noise, skip them
    [ $c -le 3 ] && continue
    d=`echo $i | sed -r 's/[^0-9]*([0-9]+-[0-9]+-[0-9]+).*/\1/'`
    d=`date -d $d +'%s'`
    echo $i
    if [ `expr $dc - 691200` -ge $d ]
    then
        echo "delete $i" | sftp $REMOTE_SERVER
        echo 'deleted'
    fi
done

# create weekly backup if sunday
if [ `date +%u` -eq 7 ]
then
    echo "put /tmp/$BACKUP_FILE $REMOTE_BACKUP_PATH/weekly/$BACKUP_FILE" | sftp $REMOTE_SERVER
fi

# rotate weekly backups (delete backups older than a month)
c=0
for i in `echo "ls $REMOTE_BACKUP_PATH/weekly" | sftp $REMOTE_SERVER`
do
    c=`expr $c + 1`
    [ $c -le 3 ] && continue
    d=`echo $i | sed -r 's/[^0-9]*([0-9]+-[0-9]+-[0-9]+).*/\1/'`
    d=`date -d $d +'%s'`
    echo $i
    if [ `expr $dc - 2678400` -ge $d ]
    then
        echo "delete $i" | sftp $REMOTE_SERVER
        echo 'deleted'
    fi
done

# create monthly backup if 1st of month
if [ `date +%e` -eq 1 ]
then
    echo "put /tmp/$BACKUP_FILE $REMOTE_BACKUP_PATH/monthly/$BACKUP_FILE" | sftp $REMOTE_SERVER
fi

# rotate monthly backups (delete backups older than a year)
c=0
for i in `echo "ls $REMOTE_BACKUP_PATH/monthly" | sftp $REMOTE_SERVER`
do
    c=`expr $c + 1`
    [ $c -le 3 ] && continue
    d=`echo $i | sed -r 's/[^0-9]*([0-9]+-[0-9]+-[0-9]+).*/\1/'`
    d=`date -d $d +'%s'`
    echo $i
    if [ `expr $dc - 31536000` -ge $d ]
    then
        echo "delete $i" | sftp $REMOTE_SERVER
        echo 'deleted'
    fi
done

# clean up local backup
rm /tmp/$BACKUP_FILE
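Two details of the script that are easy to trip over: the magic cutoff numbers are plain seconds (8, 31 and 365 days respectively, each with a bit of slack over the nominal week/month/year), and the sed pattern pulls the date back out of the backup filename. A quick, locally runnable sketch of both (the filename is a made-up example):

```shell
day=86400
echo $((8 * day))     # daily cutoff:   691200
echo $((31 * day))    # weekly cutoff:  2678400
echo $((365 * day))   # monthly cutoff: 31536000

# extract the date from a backup filename, as the rotation loops do
f="live-2013-05-04.tar.bz2"
echo "$f" | sed -r 's/[^0-9]*([0-9]+-[0-9]+-[0-9]+).*/\1/'   # 2013-05-04
```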
There are three variables that can be configured: LOCAL_BACKUP_PATH (the folder to back up), REMOTE_BACKUP_PATH (where the backups are stored on the backup server) and REMOTE_SERVER (the sftp account).
Execute the above script; depending on how big your folder is, it should finish sooner or later. You should see the output of the put command and how fast the backup was transferred.
For exporting the database to the filesystem, from where I can copy the backups to the backup server, I use automysqlbackup. I did not find an official how-to on installing automysqlbackup correctly, but in fact it is straightforward:
I personally like to have both a local and a remote backup, so I used /var/backups/db as the destination for automysqlbackup. Now you should be able to run the MySQL backup:
/usr/local/bin/automysqlbackup
ls /var/backups/db
rsync does not support SFTP as a protocol, so I had to use a trick I do not really like – but what choice did I have? I created a local mount point for the backup server and used rsync locally.
Unfortunately, mounting a remote server means you would have to mount it manually every time your server reboots. This is not what we want, so we also register the mount in /etc/fstab.
Also, on my Debian installation mount.cifs was not installed, so I had to install it first using apt-get.
apt-get install cifs-utils
nano /etc/fstab
The lines I added to this file are the following:
# /mnt/backup for backup-server and rsync of mysql files
//u15000.your-backup.de/backup /mnt/backup cifs iocharset=utf8,rw,credentials=/etc/backup-credentials.txt,uid=0,gid=0,file_mode=0660,dir_mode=0770 0 0
This is also from the hetzner guide.
The file /etc/backup-credentials.txt (mode 0600) has the following content (oh, we all love passwords stored in plaintext, yeah):
username=USERNAME
password=PASSWORD
Now we are ready to install our crontab scripts:
EDITOR=nano crontab -e
I added the following lines:
0 4 * * * /usr/local/bin/automysqlbackup
30 4 * * * rsync -rtv --delete /var/backups/db/ /mnt/backup/mysqlbackup/
0 5 * * * /usr/local/bin/autofilebackup
You see that I give every process 30 minutes to execute. This might be paranoid, and you might decide differently. Another problem I want to point out here is consistency: you won't get a coherent DB backup and filesystem backup using this method. To achieve that, you would have to put your website into maintenance mode using drush, execute the backup scripts as fast as possible, and then end maintenance mode.
Of course, for bigger websites this is no solution. You would create a redundant system (using MySQL replication) and back up the filesystem using virtual machines and snapshots. VMware describes and offers such solutions, but there are others as well.
A good guide about rsync is this article.
Please let me know if you have any troubles or suggestions!
I recently wrote an article on how to get the autotools installed on OS X. Do this first – you need autoconf, automake and libtool.
Then installing the SSH2 extension is easy as pie:
Here is how this looked on my CLI:
cd /usr/local/src
sudo bash
curl -OL http://www.libssh2.org/download/libssh2-1.4.3.tar.gz
tar xzf libssh2-1.4.3.tar.gz
cd libssh2-1.4.3
./configure --prefix=/usr/local/
make
make install
exit
cd /Applications/MAMP/bin/php/php5.3.6/
./bin/pecl install channel://pecl.php.net/ssh2-0.12
Now the only thing remaining is enabling the module in your php.ini.
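For reference, the line to add should look like this (ssh2.so is the usual filename for this pecl package; double-check the "Installing ..." line that pecl prints at the end of the build if yours differs):

```ini
; load the pecl ssh2 extension
extension=ssh2.so
```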
First, a list of things missing in OS X/MAMP when adding the intl extension: the autotools (autoconf, automake, libtool), the ICU library, and the PHP source itself.
I am not sure what came over Apple, but newer versions of Xcode do not include the autotools anymore.
My strategy for building the intl extension was to use pecl. I opted for that because it is said to be configuration-less. Can't believe it? Me neither. But I think it is still a somewhat convenient way to install PHP extensions (when you do not find a binary). Also, I always install everything into /usr/local, because I think this is the place where such things belong. Don't forget to change the paths to your preferences.
Ok. Back to work.
Install autoconf/automake/libtool/ICU (many thanks to Jean-Sebastien) as root (this is system stuff, it's OK to do that as root):
sudo bash

cd /usr/local/src
curl -OL http://ftpmirror.gnu.org/autoconf/autoconf-2.68.tar.gz
tar xzf autoconf-2.68.tar.gz
cd autoconf-2.68
./configure --prefix=/usr/local
make
make install

cd /usr/local/src
curl -OL http://ftpmirror.gnu.org/automake/automake-1.11.tar.gz
tar xzf automake-1.11.tar.gz
cd automake-1.11
./configure --prefix=/usr/local
make
make install

cd /usr/local/src
curl -OL http://ftpmirror.gnu.org/libtool/libtool-2.4.tar.gz
tar xzf libtool-2.4.tar.gz
cd libtool-2.4
./configure --prefix=/usr/local
make
make install

cd /usr/local/src
curl -OL http://download.icu-project.org/files/icu4c/4.8.1.1/icu4c-4_8_1_1-src.tgz
tar xzf icu4c-4_8_1_1-src.tgz
cd icu/source
./configure --prefix=/usr/local
make
make install
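All four builds above follow the same download/extract/configure/make pattern. If you prefer, the repetition can be folded into a small helper function – a sketch with no error handling, hard-wired to the /usr/local prefix used throughout this post:

```shell
# build_from_source URL DIR: fetch a tarball into /usr/local/src,
# unpack it, and run the usual configure/make/make install dance
build_from_source() {
    url=$1
    dir=$2
    cd /usr/local/src
    curl -OL "$url"
    tar xzf "$(basename "$url")"
    cd "$dir"
    ./configure --prefix=/usr/local
    make
    make install
}
# e.g.:
# build_from_source http://ftpmirror.gnu.org/autoconf/autoconf-2.68.tar.gz autoconf-2.68
```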
Nice. Now you should have all the basics that Apple (purposely?) neglected, plus ICU.
MAMP does not include the PHP source. But – and I definitely like this – MAMP provides the source for customization. Very neat. They call it "MAMP components". Grab the version you need from SourceForge. I needed 2.0.2 because my MAMP version was 2.0.5.
Follow these steps (adjusting your paths and versions):
OK – I did all this on the command line, so here is what my commands looked like (maybe this helps more than the list above – this time no sudo, dudes, this is not system stuff!):
cd ~/Downloads
unzip MAMP_components_2.0.2.zip
cd /Applications/MAMP/bin/php/php5.3.6
mkdir include
tar xzf ~/Downloads/MAMP_components_2.0.2/php-5.3.6.tar.gz -C include
mv include/php-5.3.6 include/php
cd include/php
./configure --prefix=/Applications/MAMP/bin/php/php5.3.6
We have now extended the basic MAMP version with the PHP source code.
I did this after switching to the corresponding php-version folder.
cd /Applications/MAMP/bin/php/php5.3.6
./bin/pecl install intl
The installation routine asks you for the location of your ICU installation. I answered /usr/local. Adjust this to your setup if you used different paths.
Enable the intl extension in your php.ini by adding extension=intl.so to it. And don't forget to restart MAMP.
That should have been it. HTH? Tell me in the comments or contact me via Twitter @gruzilla if you're having trouble following this post.
We no longer need the files generated in /usr/local/src – but watch out that you only delete what you created just now. The MAMP components in your Downloads folder can obviously be deleted as well.
Some of my friends encountered problems following the above guide. Here are some hints.
Make sure you installed the MAMP components correctly. Make sure the file php.h (and all the files along with it) is in the correct place. The correct path would be:
/Applications/MAMP/bin/php/php5.3.6/include/php/main/php.h
Of course, this might differ if you use another PHP version.
Now here we have two possibilities: a) you installed ICU in the wrong directory (wrong --prefix when building ICU), or b) you gave pecl the wrong answer about where your ICU libraries are located.
If you have built the ICU libraries in the wrong location (this happens when you give configure some other prefix), you might want to uninstall them first using make uninstall.
OK – there can be multiple reasons for that.
Execute

which php

in your terminal to check which binary is used. You can change this by altering your $PATH environment variable. Do not forget to restart your terminal. Then run

php -i | grep php.ini

to check which php.ini is loaded by your binary, and edit the correct file.

If you encounter any other problems, please let me know in the comments or contact me directly.
just let me say some introductory words: i like +, and i'd prefer it to fb if everybody was on it (not only geeks and posers who think they have to join just because it's new). you made the point: many features + has are heavily missed in fb! i read somewhere (not literally): "you use your fb, but you make your +". + is, for the time being, not a social but a medial experience to me. i can customize a lot, and that is just purely awesome. my suggestion: let this be your primary rule of concept, because fb doesn't let me decide crucial things (don't know why. money?) and i have hated that since i feel mature. i tried to think about what irritates me most about +'s behaviour…
1) want to mute circles.
-> created a "roflwhoareyou" circle to add people i don't care about, people i simply do not know (maybe your suggestion thing did not work properly) – still their "totally interesting" posts appear in my stream. this sucks, because i simply don't want to see them. now don't come with "then don't add them", because then i'd say: ok – just explain the sense of circles to me again, please.
2) reset circles when writing posts, or let me define the default!
ex:
write message to circle “men”
-> yey! vaginal!
write message to circle “women”
-> oh damn fu, just sent it to “men” because + remembered my last msg went to men
now everybody thinks i’m gay. shit.
3) want to see combined circles stream
-> i can only see one stream, or all streams at once. sucks.
4) want to do a full text search on my (filtered) stream.
-> cannot! oh, sorry i forgot: you don’t have any xp in searching…
5) want to add friends to chat without awaiting their confirmation
-> i've already added them to a circle i trust, and they have done so too, so why do we have to confirm this again? you don't give a shit whether i add somebody to a circle, yet you tell everybody about it. be consistent: if they don't want to talk to me, they won't. if they don't want me to add them without their confirmation, then let them decide.
6) generally, +defaults are crap. + is the gentoo of social media networks, so let ME (=ma) decide.
-> if i want that somebody can only add me to chat with my confirmation let MA decide (not you, crappy google!)
7) search twitter hashtags in sparks!
-> sparking zendfw doesn't find anything about Zend Framework, although it's a common twitter hashtag. don't you search twitter because it's not yours? your whole business is built upon others' content, so include that.
8) show me my online friends in hangouts!
-> hangouts are awesome, but this would just add the finishing touch. besides youtube vids are sometimes not played synchronously.
9) don’t want to see double posts if someone shared a friends post.
-> make something like "shared by XY"; it's really annoying if 4 of my friends share the same message from one of my friends. not only annoying, this is SPAM! (you know that spam is bad for user xp, don't you? google mail *wink*)
10) cannot find people by [any f** social network]-name
-> come on google? you’re the #1 search engine on teh interwebz! don’t tell me you do not know my twitter friends.
[UPDATE] Yesterday @mislav contacted me and linked me his API documentation, which is much nicer than mine. Here you go: https://github.com/mislav/instagram/wiki
The base for all URLs is http://instagr.am/api/v1/ – "v1" lets us expect that they are currently developing a v2, which will probably be released some time. Normally instagr.am uses gzip as the encoding type for communication.
The following list is not complete. You may guess other actions from the URL names; I only tested a few, and only some of them made it into my PHP library.
"pk" stands for primary key, I suppose.
| Action | URL | Parameters | Response |
|---|---|---|---|
| login | accounts/login/ | username, password, device_id | {"status":"ok"} or {"status":"failed", "message":"some error message"} |
| user details | users/[pk]/info/ | – | {"status": "ok", "user": {"username": "xxx", "media_count": 0, "following_count": 0, "profile_pic_url": "http://xxx.jpg", "full_name": "xxx", "follower_count": 0, "pk": pk}} |
| post comment | media/[pk]/comment | comment_text | {"comment": {"media_id": pk, "text": "xxx", "created_at": 0, "user": {"username": "xxx", "pk": pk, "profile_pic_url": "http://xxx.jpg", "full_name": "xxx"}, "content_type": "comment", "type": 1}} |
| change media data | media/configure | device_timestamp=0&caption=xxx&location={"name":"xxx","lng":0,"lat":0,"external_id":0,"address":"xxx","external_source":"foursquare"}&source_type=0&filter_type=15 | {"status": "ok", "media": {"image_versions": [{"url": "xxx.jpg", "width": 150, "type": 5, "height": 150}, {"url": "xxx.jpg", "width": 306, "type": 6, "height": 306}, {"url": "http://xxx.jpg", "width": 612, "type": 7, "height": 612}], "code": "xxx", "likers": [], "taken_at": 0, "location": {"external_source": "foursquare", "name": "xxx", "address": "xxx", "lat": 0, "pk": pk, "lng": 0, "external_id": 0}, "filter_type": "15", "device_timestamp": 0, "user": {"username": "xxx", "pk": pk, "profile_pic_url": "http://xx.jpg", "full_name": "xxx"}, "media_type": 1, "lat": 0, "pk": 0, "lng": 0, "comments": [{"media_id": 0, "text": "xxx", "created_at": 0, "user": {"username": "xxx", "pk": pk, "profile_pic_url": "http://xxx.jpg", "full_name": "xxx"}, "content_type": "comment", "type": 1}]}} |
| like media | media/[pk]/like/ | – | {"status": "ok"} |
| upload media | media/upload | (multipart/form-data) device_timestamp=0, lat=0, lng=0, photo=(binary data), filename="file" | {"status": "ok"} |
| show friend details | friendships/show/[pk] | – | {"following": true, "status": "ok", "incoming_request": false, "followed_by": true, "outgoing_request": false} |
| show own feed | feed/timeline/? | (the last ? may be a number that defines how many elements you want to load, but I haven't tested it so far) | what you get back is very massive (and would be a repetition), therefore only an overview: {"status":"ok", "items":[ — a list of media items, similar to the response to changing a media — ],"num_results":0} |
| show feed of a user | feed/user/? | (same as above) | (same as above) |
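To make the table concrete, here is how a login call could be assembled with curl. The API is long gone, so this sketch only prints the command instead of executing it; --compressed covers the gzip encoding mentioned above, and the credentials are placeholders:

```shell
BASE="http://instagr.am/api/v1"
# print the curl command for the login action from the table
echo curl -s --compressed \
     -d "username=xxx" -d "password=secret" -d "device_id=0000" \
     "$BASE/accounts/login/"
```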
As I already said, there are definitely more possible actions, but I didn't try every one. I'm happy with reading from instagr.am, which is what I did at ma instagram.
I hope I can bring this API into a nicely readable form in the near future.
Hope that helps someone.
i have to add that i never was a fan of hyped projects, especially if there is much money behind them. for me, 200k is so much i cannot even imagine it. but i also hate people always talking about money and profit. so i took a look at diaspora as the (maybe typically intended) guy who is able to install and use it. however, after installing diaspora on our ubuntu machine (ok, at first it did not work because i was still on hardy – you know, never change a running system and so on), i was disappointed.
maybe i have to say some words about my background as a software engineer: i worked in a team realizing a software project with 300+ features, and the budget was much less (more imaginable). atm it's on production level and is used by several clients.
this is not meant to be a comparison at 1st sight; it's meant to show the disappointment other devs might feel after checking out the source. with "other devs" i don't mean myself – our project is situated in a completely different area – i mean the devs of Elgg, Japixx or other projects who did not get the same media response and financial support but still fight for the same cause: enhancing privacy and trust. at least i don't know any sources proving that they are not heavily funded, which however may be the case.
the idea of a heavily distributed social network is very nice. also, the fact that diaspora does not depend on other protocols or on a server infrastructure is, i think, the right way to achieve the target.
so what did i like about diaspora? i liked the fact that every url included a hash – no nice urls. if some proxy is in the middle or a link is shared, no one can make predictions just from the url. i liked the aspects. i liked the simple user interface (…the parts that worked).
the guys at diaspora definitely put much work into this pre-alpha release. in fact they wrote a very detailed installation guide (which would have worked seamlessly if our server had been up to date). i really like projects explaining how the program can be installed. a post in the discussion group (currently closed, for whatever reason) hit the mark: "installed – now where to go". the guide simply missed this part!
after installing you have no idea what to do next. play around – ok, but as a dev i still have no idea how it is intended to work. is it a bug that a friend can only be added to one aspect, or is it a feature? so: no documentation. now to the core of the whole diaspora hype – the federated architecture. no documentation that explains how to connect two diaspora seeds, just a quick note in the dev group to take a look at some seed files. so i took a look at those files. still no idea how this is intended to work, because there are no comments explaining anything.
this is the real reason why i am disappointed: 1st, i could not try out the features of the ui due to bugs (which is ok – it still is a pre-alpha), and 2nd, i could not understand how the core is intended to work (without reading millions of lines of code).
so don't call it real at the moment. you put much work into what you've done – why not add 10 more lines explaining what to do next after installing it?
at first i thought twiddling around in the system settings might solve the problem: no way.
then i hoped to fix it with applescript (tell the app to position the window somewhere else) – no way, it's java, you *$"$§, not cocoa.
i googled for hours but only found other people having the same problem, and no solution.
the easiest way to fix this is by far to drag the window to the primary monitor ahead of disconnecting the secondary one. 101.
well – what do you do if you are on holidays? every one of us desperately needs jdownloader in his/her holidays, don't we?!
so what to do?!
i searched for a config file, something similar to a plist or whatever.
and then i found it: database.script
cd /Applications/jDownloader.app/Contents/Resources/Java/config
nano database.script
then ctrl+w to find "jdgui" – you will find something like this:
INSERT INTO CONFIG VALUES('jdgui','aced.......
DELETE this line and you're done
(don’t forget to restart jdownloader)
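the same fix can be scripted with grep -v. since you probably don't want to experiment on the real file, here is the idea demonstrated against a throwaway copy (the real file lives at the path shown above; keep a backup before touching it):

```shell
# simulate database.script with two config rows, then strip the jdgui one
f=$(mktemp)
printf "INSERT INTO CONFIG VALUES('jdgui','aced...')\nINSERT INTO CONFIG VALUES('other','keep me')\n" > "$f"
grep -v "'jdgui'" "$f" > "$f.fixed"
cat "$f.fixed"        # only the non-jdgui line survives
rm -f "$f" "$f.fixed"
```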
HTH
one on how to set up a DIY web server under OS X Snow Leopard,
and the result of the collaboration with Johannes (morgenstille turboblog) for a university course about "intellectual property" (brrr, that term makes me shudder).
have fun reading…