Creating a dynamic web page with PHP

Dynamic website in PHP

I have created a generic template which displays content for the parameter passed in the URL by fetching details from the database.

Example:
Home page: index.php
Template: Pages.php
Link displayed on the home page: "This is fetched from Database"
When clicked, it should navigate to: Pages.php/This is fetched from Database (this is the parameter passed to the database to get the content)

This works properly, but when I click Home (index.php) from Pages.php/This is fetched from Database, the link is treated as Pages.php/index.php instead of going to index.php directly.

I have used .htaccess to rewrite the URL, but it didn't work out.

Basically, I am building a dynamic website, and my question is: how can I effectively use the generic template concept to achieve it?

The code I used to produce the Pages.php/This is fetched from Database link:

query("SELECT Title FROM Posts where Title = $uri"); echo "."' alt = ""> "; ?> 

The relative URL index.php means:

  1. Take the URL of the current page
  2. Remove everything after the path
  3. Remove everything after the last /
  4. Append the relative URL to it

If you don't want to keep everything in the path before the last /, then you need to link to /index.php and not index.php.
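
To see this concretely, assume the site lives at http://example.com (a hypothetical host). On the template page, the two links resolve like this:

    Current page:      http://example.com/Pages.php/This is fetched from Database
    href="index.php"   resolves to  http://example.com/Pages.php/index.php  (wrong)
    href="/index.php"  resolves to  http://example.com/index.php            (correct)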

You can do some searching on PHP design patterns to get a better idea of what you are trying to achieve.

Here is somewhere you can start from:

$uri is a string, not an array, so the line above will only get the character at the 4th position; try removing that line and test.
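
A quick sketch of the difference (the value of $uri is just an example):

    $uri = "This is fetched from Database";
    echo $uri[3]; // prints "s", the 4th character of the string, not a path segment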

Create Complete Responsive Dynamic Website in HTML: how to create a completely responsive dynamic website using HTML, CSS, PHP, and MySQL (in Hindi). We will use Bootstrap 4 …

Making a Dynamic Website With PHP

* NOTE: 1. Use include(dirname(__FILE__) . DIRECTORY_SEPARATOR . 'my_file.php'); this is a more secure way to include files. 2. [optional] Use basename($_GET
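
A minimal sketch of both points, assuming a pages/ directory and a page query parameter (both hypothetical):

    // 1. Build include paths relative to this file, not the working directory
    include dirname(__FILE__) . DIRECTORY_SEPARATOR . 'header.php';

    // 2. basename() strips directory components from user input, blocking
    //    path-traversal tricks such as ?page=../../etc/passwd
    $page = basename($_GET['page'] ?? 'home');
    include dirname(__FILE__) . DIRECTORY_SEPARATOR . 'pages' . DIRECTORY_SEPARATOR . $page . '.php';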

How to host a Dynamic (PHP) website for FREE

Hey, this is Kamal. Today we are going to see how to host a dynamic website for free with the help of infinityfree.net. Dynamic websites are the ones that are



How to make a dynamic website highly scalable?

How can we make a dynamic website (a social networking site developed in PHP with MySQL as the backend) highly scalable?

Look at the infrastructure of sites like Facebook, Twitter, and YouTube on High Scalability. They give you a really nice overview of the tools that are out there (most of them are open source and free).

You should probably look into:

  • Reverse proxying / load balancing (Squid or Varnish)
  • Data caching (memcache and memcached; see the sketch after this list)
  • A possible backend in C++
  • NoSQL (Cassandra, CouchDB, MemcacheDB)
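
To make the caching bullet concrete, here is a minimal sketch using the PHP Memcached extension; the server address, the key, the TTL, and the load_profile_from_mysql() helper are all hypothetical:

    $cache = new Memcached();
    $cache->addServer('127.0.0.1', 11211);

    $key = 'user_profile_42';
    $profile = $cache->get($key);
    if ($profile === false && $cache->getResultCode() === Memcached::RES_NOTFOUND) {
        $profile = load_profile_from_mysql(42); // hypothetical DB helper
        $cache->set($key, $profile, 300);       // keep the result for five minutes
    }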

I wrote a post on the topic a couple of weeks ago; if you're interested, check it out here.

Follow good design and programming practices, such as low coupling. Try to find the best solution for your site (design patterns, OO best practices) with scalability in mind.

I don't think there's any silver bullet here; it depends on each case.

Redundancy is quite important too, as stillstanding says.

Still, first focus on making the best design possible, and when the time comes, worry about scalability.

That is like asking how to build a skyscraper; the answer is complicated and depends on a lot of things.

But these questions and suggestions should help a lot.

Diagram out your system design. Then:

  • Create logical partitions for services that can be split over multiple servers.
  • Create multiple virtual host names for services that can be logically split off, like static image hosting.
  • Determine which services need to be persistent over multiple requests, and develop a session location and caching system.

Put SQL caching in front of your SQL requests.
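
One way to sketch that in PHP: cache query results under a hash of the SQL text. This assumes the Memcached connection from the sketch above and a mysqli handle; both names are illustrative:

    function cached_query(mysqli $db, Memcached $cache, string $sql, int $ttl = 60): array
    {
        $key = 'sql_' . md5($sql);
        $rows = $cache->get($key);
        if ($rows === false) {
            $rows = $db->query($sql)->fetch_all(MYSQLI_ASSOC); // hit MySQL only on a miss
            $cache->set($key, $rows, $ttl);
        }
        return $rows;
    }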

Start thinking about "the cloud": consider creating "disconnected" virtual images that can talk to each other and exchange all state information in a structured, server-agnostic way. If you get the design right, you can add more cloud capacity transparently.
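
One hedged example of server-agnostic state in PHP: point the session handler at a shared Memcached pool so any virtual image can serve any request (the address is a placeholder; requires the memcached PECL extension):

    ini_set('session.save_handler', 'memcached');
    ini_set('session.save_path', '10.0.0.5:11211'); // shared pool, not any one web server
    session_start();
    $_SESSION['user_id'] = 42; // now visible from every app server behind the balancer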

Worry about scaling when you need to worry about scaling, not before. At that point you should know the answer or be able to hire someone who does.

Let me attempt to clarify this answer a little.

Scaling is a fairly convoluted term in modern development because, very often, "best practices" such as using design patterns, frameworks, database abstraction, and normalization are the root cause of the bottlenecks that limit scalability on extremely high-demand websites.

Throwing hardware at software will basically allow you to scale anything to a point.

At that point (where you need to throw more hardware at the problem) you most likely have a large enough user base that you are either making money off your website/service or, in the case of an open source product, have a following. These resources will give you access to higher-level computer scientists/engineers who can help with scaling.

Scaling at the level of reducing stress on hardware becomes very anti-"best practices". You begin writing code to do very specific tasks quickly. Take Twitter, for example: they have basically rewritten Twitter to no longer use Ruby on Rails, using Scala instead, and redesigned Twitter's already successful features around performance alone. Facebook, on the other hand, has written custom PHP packages in order to compile PHP.


Web Scraping with PHP – How to Crawl Web Pages: it is also called web crawling or web data extraction. PHP is a widely used back-end scripting language for creating dynamic websites and web applications, and you can implement a web scraper using plain PHP code.

How to grab dynamic content from a website and save it?

For example, I need to grab the amount of free storage from http://gmail.com/:

And then store those numbers in a MySQL database. The number, as you can see, changes dynamically.

Is there a way I can set up a server-side script that will grab that number every time it changes and save it to the database?

Since Gmail doesn’t provide any API to get this information, it sounds like you want to do some web scraping.

Web scraping (also called Web harvesting or Web data extraction) is a computer software technique of extracting information from websites.

There are numerous ways of doing this, as mentioned in the Wikipedia article linked above:

Human copy-and-paste: sometimes even the best Web-scraping technology cannot replace a human's manual examination and copy-and-paste, and sometimes this may be the only workable solution when the websites being scraped explicitly set up barriers to prevent machine automation.

Text grepping and regular expression matching: A simple yet powerful approach to extract information from Web pages can be based on the UNIX grep command or regular expression matching facilities of programming languages (for instance Perl or Python).

HTTP programming: static and dynamic Web pages can be retrieved by posting HTTP requests to the remote Web server using socket programming.

DOM parsing: By embedding a full-fledged Web browser, such as the Internet Explorer or the Mozilla Web browser control, programs can retrieve the dynamic contents generated by client side scripts. These Web browser controls also parse Web pages into a DOM tree, based on which programs can retrieve parts of the Web pages.

HTML parsers: Some semi-structured data query languages, such as the XML query language (XQL) and the hyper-text query language (HTQL), can be used to parse HTML pages and to retrieve and transform Web content.

Web-scraping software: there are many Web-scraping software packages available that can be used to customize Web-scraping solutions. They may provide a Web recording interface that removes the need to write Web-scraping code manually, scripting functions that can be used to extract and transform Web content, and database interfaces that can store the scraped data in local databases.

Semantic annotation recognizing: the Web pages may embrace metadata or semantic markups/annotations which can be used to locate specific data snippets. If the annotations are embedded in the pages, as Microformats do, this technique can be viewed as a special case of DOM parsing. In the other case, the annotations, organized into a semantic layer, are stored and managed separately from the Web pages, so the Web scrapers can retrieve the data schema and instructions from this layer before scraping the pages.

And before I continue, please keep in mind the legal implications of all this. I don't know if it's compliant with Gmail's terms, and I would recommend checking them before moving forward. You might also end up being blacklisted or encounter other issues like this.


All that being said, I'd say that in your case you need some kind of spider and DOM parser to log into Gmail and find the data you want. The choice of tool will depend on your technology stack.

As a Ruby dev, I like using Mechanize and Nokogiri. In PHP, you could take a look at solutions like Sphider.

Initially I thought it was not possible, thinking that the number was initialized by JavaScript.

But if you switch off JavaScript, the number is there in the span tag, and probably a JavaScript function increases it at a regular interval.

So you can use cURL, fopen(), etc. to read the contents from the URL, then parse the contents looking for this value and store it in the database. Then set this up as a cron job to run on a regular basis, as sketched below.
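
A minimal sketch of that approach; the URL, the span's class name, and the table layout are hypothetical placeholders:

    // Fetch the page
    $ch = curl_init('https://example.com/');
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    $html = curl_exec($ch);
    curl_close($ch);

    // Parse it and pull the value out of the target span
    $doc = new DOMDocument();
    @$doc->loadHTML($html); // suppress warnings from sloppy real-world markup
    $xpath = new DOMXPath($doc);
    $node = $xpath->query("//span[@class='quota']")->item(0);

    if ($node !== null) {
        $value = trim($node->textContent);
        // e.g. INSERT INTO quota_history (value, seen_at) VALUES (?, NOW())
        // through a prepared statement
    }

A crontab entry such as */5 * * * * php /path/to/grab.php would then run it every five minutes.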

There are many references on how to do this, including on SO. If you get stuck, just open another question.

Warning: Google has ways of finding out if its apps are being scraped, and it will block your IP for a certain period of time. Read the Google small print. It has happened to me.

One way I can see you doing this (which may not be the most efficient way) is to use PHP and YQL (from Yahoo!). With YQL, you can specify the web page (www.gmail.com) and the XPath to get the value inside the span tag. It's essentially web scraping, but YQL provides you with a nice way to do it in maybe 4-5 lines of code.

You can wrap this whole thing inside a function that gets called every x seconds, or whatever time period you are looking for, as in the sketch below.
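
A hedged sketch of the YQL approach; the endpoint, the XPath, and the result handling are illustrative and should be checked against Yahoo!'s current documentation and quotas:

    function fetch_quota()
    {
        $yql = 'select * from html where url="http://www.gmail.com" '
             . 'and xpath="//span[@id=\'quota\']"'; // hypothetical element id
        $url = 'https://query.yahooapis.com/v1/public/yql?q=' . urlencode($yql) . '&format=json';
        $data = json_decode(file_get_contents($url), true);
        return $data['query']['results'] ?? null; // exact shape depends on the page markup
    }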

Leaving aside the legality issues in this particular case, I would suggest the following:

When you find yourself attacking something impossible, stop and think about where the impossibility comes from, and whether you chose the correct way.

Do you really think that anyone in their right mind would issue a new HTTP connection, or even worse hold an open Comet connection, just to check whether the shared storage counter has grown? For an anonymous user? Just look for the function that computes the value from some initial value and the current time, as sketched below.
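
A minimal sketch of that idea; every constant below is made up for illustration, not Gmail's real numbers:

    $initBytes  = 7000000000;                       // counter value at a known start time
    $ratePerSec = 0.33;                             // observed growth per second
    $startTs    = strtotime('2010-01-01 00:00:00');
    $current    = $initBytes + $ratePerSec * (time() - $startTs);
    echo round($current);                           // no scraping needed at all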

How to create a Complete Dynamic Website: how to make a dynamic website using HTML, CSS, PHP and MySQL databases. We will make the website from scratch by installing XAMPP …
